00:00:00.001 Started by upstream project "autotest-per-patch" build number 127174
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.094 The recommended git tool is: git
00:00:00.095 using credential 00000000-0000-0000-0000-000000000002
00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.146 Fetching changes from the remote Git repository
00:00:00.148 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.202 Using shallow fetch with depth 1
00:00:00.202 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.202 > git --version # timeout=10
00:00:00.240 > git --version # 'git version 2.39.2'
00:00:00.240 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.257 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.257 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.347 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.358 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.371 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD)
00:00:05.371 > git config core.sparsecheckout # timeout=10
00:00:05.382 > git read-tree -mu HEAD # timeout=10
00:00:05.398 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5
00:00:05.421 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs"
00:00:05.421 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10
00:00:05.509 [Pipeline] Start of Pipeline
00:00:05.522 [Pipeline] library
00:00:05.523 Loading library shm_lib@master
00:00:05.524 Library shm_lib@master is cached. Copying from home.
00:00:05.536 [Pipeline] node
00:00:05.543 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:05.545 [Pipeline] {
00:00:05.552 [Pipeline] catchError
00:00:05.553 [Pipeline] {
00:00:05.563 [Pipeline] wrap
00:00:05.570 [Pipeline] {
00:00:05.576 [Pipeline] stage
00:00:05.577 [Pipeline] { (Prologue)
00:00:05.590 [Pipeline] echo
00:00:05.591 Node: VM-host-SM0
00:00:05.595 [Pipeline] cleanWs
00:00:05.603 [WS-CLEANUP] Deleting project workspace...
00:00:05.603 [WS-CLEANUP] Deferred wipeout is used...
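For reference, the shallow checkout the git plugin performs above can be reproduced by hand with only the commands visible in the log; a minimal sketch (the local directory name is illustrative, and the proxy/credential setup shown in the log is omitted):

  # Shallow-fetch a single revision of the jbp pool repo and check it out detached,
  # mirroring the Jenkins git plugin's sequence above.
  git init jbp && cd jbp
  git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff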
00:00:05.608 [WS-CLEANUP] done
00:00:05.795 [Pipeline] setCustomBuildProperty
00:00:05.889 [Pipeline] httpRequest
00:00:05.917 [Pipeline] echo
00:00:05.918 Sorcerer 10.211.164.101 is alive
00:00:05.926 [Pipeline] httpRequest
00:00:05.991 HttpMethod: GET
00:00:05.992 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:05.992 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:05.993 Response Code: HTTP/1.1 200 OK
00:00:05.993 Success: Status code 200 is in the accepted range: 200,404
00:00:05.994 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:07.040 [Pipeline] sh
00:00:07.320 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:07.335 [Pipeline] httpRequest
00:00:07.361 [Pipeline] echo
00:00:07.362 Sorcerer 10.211.164.101 is alive
00:00:07.371 [Pipeline] httpRequest
00:00:07.375 HttpMethod: GET
00:00:07.375 URL: http://10.211.164.101/packages/spdk_5038b8b39aaa930855f0c6fc4c80fd3894ac185d.tar.gz
00:00:07.376 Sending request to url: http://10.211.164.101/packages/spdk_5038b8b39aaa930855f0c6fc4c80fd3894ac185d.tar.gz
00:00:07.394 Response Code: HTTP/1.1 200 OK
00:00:07.394 Success: Status code 200 is in the accepted range: 200,404
00:00:07.395 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_5038b8b39aaa930855f0c6fc4c80fd3894ac185d.tar.gz
00:01:04.545 [Pipeline] sh
00:01:04.892 + tar --no-same-owner -xf spdk_5038b8b39aaa930855f0c6fc4c80fd3894ac185d.tar.gz
00:01:08.190 [Pipeline] sh
00:01:08.473 + git -C spdk log --oneline -n5
00:01:08.473 5038b8b39 lib/idxd: add descriptors for DIX generate
00:01:08.473 e98d44db1 lib/accel: add spdk_accel_append_dix_generate/verify
00:01:08.473 325310f6a accel_perf: add support for DIX Generate/Verify
00:01:08.473 fcdc45f1b test/accel/dif: add DIX Generate/Verify suites
00:01:08.473 ae7704717 lib/accel: add DIX verify
00:01:08.492 [Pipeline] writeFile
00:01:08.511 [Pipeline] sh
00:01:08.792 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:08.802 [Pipeline] sh
00:01:09.078 + cat autorun-spdk.conf
00:01:09.078 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.078 SPDK_TEST_NVMF=1
00:01:09.078 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.078 SPDK_TEST_USDT=1
00:01:09.078 SPDK_TEST_NVMF_MDNS=1
00:01:09.078 SPDK_RUN_UBSAN=1
00:01:09.078 NET_TYPE=virt
00:01:09.078 SPDK_JSONRPC_GO_CLIENT=1
00:01:09.078 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.085 RUN_NIGHTLY=0
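autorun-spdk.conf, printed above, is a plain shell fragment that the test scripts source; a minimal sketch of that consumption pattern (variable names taken from the log, the default-guards and the gating example are illustrative, not SPDK's actual code):

  #!/bin/bash
  # Pull in the job configuration, then default anything it leaves unset.
  source ./autorun-spdk.conf
  : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"
  : "${SPDK_TEST_NVMF:=0}"
  : "${RUN_NIGHTLY:=0}"
  # The flags then gate which suites run, e.g.:
  if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
      echo "would run NVMe-oF over TCP functional tests"
  fi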
00:01:09.086 [Pipeline] }
00:01:09.102 [Pipeline] // stage
00:01:09.117 [Pipeline] stage
00:01:09.119 [Pipeline] { (Run VM)
00:01:09.134 [Pipeline] sh
00:01:09.456 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:09.456 + echo 'Start stage prepare_nvme.sh'
00:01:09.456 Start stage prepare_nvme.sh
00:01:09.456 + [[ -n 4 ]]
00:01:09.456 + disk_prefix=ex4
00:01:09.456 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:01:09.456 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:01:09.456 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:01:09.456 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.456 ++ SPDK_TEST_NVMF=1
00:01:09.456 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.456 ++ SPDK_TEST_USDT=1
00:01:09.456 ++ SPDK_TEST_NVMF_MDNS=1
00:01:09.456 ++ SPDK_RUN_UBSAN=1
00:01:09.456 ++ NET_TYPE=virt
00:01:09.456 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:09.456 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.456 ++ RUN_NIGHTLY=0
00:01:09.456 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:09.456 + nvme_files=()
00:01:09.456 + declare -A nvme_files
00:01:09.456 + backend_dir=/var/lib/libvirt/images/backends
00:01:09.456 + nvme_files['nvme.img']=5G
00:01:09.456 + nvme_files['nvme-cmb.img']=5G
00:01:09.456 + nvme_files['nvme-multi0.img']=4G
00:01:09.456 + nvme_files['nvme-multi1.img']=4G
00:01:09.456 + nvme_files['nvme-multi2.img']=4G
00:01:09.456 + nvme_files['nvme-openstack.img']=8G
00:01:09.456 + nvme_files['nvme-zns.img']=5G
00:01:09.456 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:09.456 + (( SPDK_TEST_FTL == 1 ))
00:01:09.456 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:09.456 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:09.457 + for nvme in "${!nvme_files[@]}"
00:01:09.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:09.457 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:09.457 + for nvme in "${!nvme_files[@]}"
00:01:09.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:09.457 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:09.457 + for nvme in "${!nvme_files[@]}"
00:01:09.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:09.457 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:09.457 + for nvme in "${!nvme_files[@]}"
00:01:09.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:09.457 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:09.457 + for nvme in "${!nvme_files[@]}"
00:01:09.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:09.457 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:09.457 + for nvme in "${!nvme_files[@]}"
00:01:09.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:09.457 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:09.457 + for nvme in "${!nvme_files[@]}"
00:01:09.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:09.715 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:09.715 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:09.715 + echo 'End stage prepare_nvme.sh'
00:01:09.715 End stage prepare_nvme.sh
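The "Formatting ... fmt=raw ... preallocation=falloc" lines above are qemu-img output; each backing file can be created directly with a command like the following (path and size taken from the log; whether create_nvme_img.sh adds further options is not shown here, so treat this as an approximation):

  # Preallocated raw backing image for the ex4-nvme device (5G == 5368709120 bytes).
  qemu-img create -f raw -o preallocation=falloc \
      /var/lib/libvirt/images/backends/ex4-nvme.img 5G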
00:01:09.727 [Pipeline] sh
00:01:10.007 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:10.007 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38
00:01:10.007
00:01:10.007 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:01:10.007 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:01:10.007 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:10.007 HELP=0
00:01:10.007 DRY_RUN=0
00:01:10.007 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:01:10.007 NVME_DISKS_TYPE=nvme,nvme,
00:01:10.007 NVME_AUTO_CREATE=0
00:01:10.007 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:01:10.007 NVME_CMB=,,
00:01:10.007 NVME_PMR=,,
00:01:10.007 NVME_ZNS=,,
00:01:10.007 NVME_MS=,,
00:01:10.007 NVME_FDP=,,
00:01:10.007 SPDK_VAGRANT_DISTRO=fedora38
00:01:10.007 SPDK_VAGRANT_VMCPU=10
00:01:10.007 SPDK_VAGRANT_VMRAM=12288
00:01:10.007 SPDK_VAGRANT_PROVIDER=libvirt
00:01:10.007 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:10.007 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:10.007 SPDK_OPENSTACK_NETWORK=0
00:01:10.007 VAGRANT_PACKAGE_BOX=0
00:01:10.007 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:10.007 FORCE_DISTRO=true
00:01:10.007 VAGRANT_BOX_VERSION=
00:01:10.007 EXTRA_VAGRANTFILES=
00:01:10.007 NIC_MODEL=e1000
00:01:10.007
00:01:10.007 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt'
00:01:10.007 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:13.335 Bringing machine 'default' up with 'libvirt' provider...
00:01:13.904 ==> default: Creating image (snapshot of base box volume).
00:01:14.162 ==> default: Creating domain with the following settings...
00:01:14.162 ==> default:  -- Name:              fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721909308_c2ab3d029cf5496f0ccd
00:01:14.162 ==> default:  -- Domain type:       kvm
00:01:14.162 ==> default:  -- Cpus:              10
00:01:14.162 ==> default:  -- Feature:           acpi
00:01:14.162 ==> default:  -- Feature:           apic
00:01:14.162 ==> default:  -- Feature:           pae
00:01:14.162 ==> default:  -- Memory:            12288M
00:01:14.162 ==> default:  -- Memory Backing:    hugepages:
00:01:14.162 ==> default:  -- Management MAC:
00:01:14.162 ==> default:  -- Loader:
00:01:14.162 ==> default:  -- Nvram:
00:01:14.162 ==> default:  -- Base box:          spdk/fedora38
00:01:14.162 ==> default:  -- Storage pool:      default
00:01:14.162 ==> default:  -- Image:             /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721909308_c2ab3d029cf5496f0ccd.img (20G)
00:01:14.162 ==> default:  -- Volume Cache:      default
00:01:14.162 ==> default:  -- Kernel:
00:01:14.162 ==> default:  -- Initrd:
00:01:14.162 ==> default:  -- Graphics Type:     vnc
00:01:14.162 ==> default:  -- Graphics Port:     -1
00:01:14.162 ==> default:  -- Graphics IP:       127.0.0.1
00:01:14.162 ==> default:  -- Graphics Password: Not defined
00:01:14.162 ==> default:  -- Video Type:        cirrus
00:01:14.162 ==> default:  -- Video VRAM:        9216
00:01:14.162 ==> default:  -- Sound Type:
00:01:14.162 ==> default:  -- Keymap:            en-us
00:01:14.162 ==> default:  -- TPM Path:
00:01:14.162 ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:14.162 ==> default:  -- Command line args:
00:01:14.162 ==> default:    -> value=-device,
00:01:14.162 ==> default:    -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:14.162 ==> default:    -> value=-drive,
00:01:14.162 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:01:14.162 ==> default:    -> value=-device,
00:01:14.162 ==> default:    -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:14.162 ==> default:    -> value=-device,
00:01:14.162 ==> default:    -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:14.162 ==> default:    -> value=-drive,
00:01:14.162 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:14.162 ==> default:    -> value=-device,
00:01:14.162 ==> default:    -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:14.162 ==> default:    -> value=-drive,
00:01:14.162 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:14.163 ==> default:    -> value=-device,
00:01:14.163 ==> default:    -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:14.163 ==> default:    -> value=-drive,
00:01:14.163 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:14.163 ==> default:    -> value=-device,
00:01:14.163 ==> default:    -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:14.163 ==> default: Creating shared folders metadata...
00:01:14.163 ==> default: Starting domain.
00:01:16.698 ==> default: Waiting for domain to get an IP address...
00:01:38.619 ==> default: Waiting for SSH to become available...
00:01:38.619 ==> default: Configuring and enabling network interfaces...
00:01:40.528     default: SSH address: 192.168.121.28:22
00:01:40.528     default: SSH username: vagrant
00:01:40.528     default: SSH auth method: private key
00:01:43.092 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:51.224 ==> default: Mounting SSHFS shared folder...
00:01:51.790 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:51.790 ==> default: Checking Mount..
00:01:53.185 ==> default: Folder Successfully Mounted!
00:01:53.185 ==> default: Running provisioner: file...
00:01:53.759     default: ~/.gitconfig => .gitconfig
00:01:54.018
00:01:54.018 SUCCESS!
00:01:54.018
00:01:54.018 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:01:54.018 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:54.018 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
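The "-> value=" pairs above are extra arguments appended to the libvirt domain's QEMU command line; flattened into a direct invocation they would look roughly like this (emulator path and device/drive arguments copied from the log, the rest of the machine definition omitted):

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

The result is one controller (serial 12340) with a single namespace and a second controller (serial 12341) exposing three namespaces, which is what the guest later reports as nvme0n1 and nvme1n1-n3 in the setup.sh status table.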
00:01:54.018
00:01:54.028 [Pipeline] }
00:01:54.048 [Pipeline] // stage
00:01:54.058 [Pipeline] dir
00:01:54.059 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt
00:01:54.061 [Pipeline] {
00:01:54.079 [Pipeline] catchError
00:01:54.082 [Pipeline] {
00:01:54.098 [Pipeline] sh
00:01:54.379 + vagrant ssh-config --host vagrant
00:01:54.379 + sed -ne /^Host/,$p
00:01:54.379 + tee ssh_conf
00:01:58.566 Host vagrant
00:01:58.566   HostName 192.168.121.28
00:01:58.566   User vagrant
00:01:58.566   Port 22
00:01:58.566   UserKnownHostsFile /dev/null
00:01:58.566   StrictHostKeyChecking no
00:01:58.566   PasswordAuthentication no
00:01:58.566   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:58.566   IdentitiesOnly yes
00:01:58.566   LogLevel FATAL
00:01:58.566   ForwardAgent yes
00:01:58.566   ForwardX11 yes
00:01:58.566
00:01:58.579 [Pipeline] withEnv
00:01:58.582 [Pipeline] {
00:01:58.600 [Pipeline] sh
00:01:58.881 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:58.881 source /etc/os-release
00:01:58.881 [[ -e /image.version ]] && img=$(< /image.version)
00:01:58.881 # Minimal, systemd-like check.
00:01:58.881 if [[ -e /.dockerenv ]]; then
00:01:58.881 # Clear garbage from the node's name:
00:01:58.881 # agt-er_autotest_547-896 -> autotest_547-896
00:01:58.881 # $HOSTNAME is the actual container id
00:01:58.881 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:58.881 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:58.881 # We can assume this is a mount from a host where container is running,
00:01:58.881 # so fetch its hostname to easily identify the target swarm worker.
00:01:58.881 container="$(< /etc/hostname) ($agent)"
00:01:58.881 else
00:01:58.881 # Fallback
00:01:58.881 container=$agent
00:01:58.881 fi
00:01:58.881 fi
00:01:58.881 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:58.881
00:01:58.892 [Pipeline] }
00:01:58.911 [Pipeline] // withEnv
00:01:58.919 [Pipeline] setCustomBuildProperty
00:01:58.936 [Pipeline] stage
00:01:58.938 [Pipeline] { (Tests)
00:01:58.955 [Pipeline] sh
00:01:59.234 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:59.506 [Pipeline] sh
00:01:59.786 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:00.054 [Pipeline] timeout
00:02:00.055 Timeout set to expire in 40 min
00:02:00.056 [Pipeline] {
00:02:00.066 [Pipeline] sh
00:02:00.337 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:00.901 HEAD is now at 5038b8b39 lib/idxd: add descriptors for DIX generate
00:02:00.913 [Pipeline] sh
00:02:01.194 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:01.464 [Pipeline] sh
00:02:01.742 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:02.016 [Pipeline] sh
00:02:02.295 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:02:02.295 ++ readlink -f spdk_repo
00:02:02.295 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:02.295 + [[ -n /home/vagrant/spdk_repo ]]
00:02:02.295 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:02.295 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:02.552 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:02.552 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:02.552 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:02.552 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:02.552 + cd /home/vagrant/spdk_repo
00:02:02.552 + source /etc/os-release
00:02:02.552 ++ NAME='Fedora Linux'
00:02:02.552 ++ VERSION='38 (Cloud Edition)'
00:02:02.552 ++ ID=fedora
00:02:02.552 ++ VERSION_ID=38
00:02:02.552 ++ VERSION_CODENAME=
00:02:02.552 ++ PLATFORM_ID=platform:f38
00:02:02.552 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:02.552 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:02.552 ++ LOGO=fedora-logo-icon
00:02:02.552 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:02.552 ++ HOME_URL=https://fedoraproject.org/
00:02:02.552 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:02.552 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:02.552 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:02.552 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:02.552 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:02.552 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:02.552 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:02.552 ++ SUPPORT_END=2024-05-14
00:02:02.552 ++ VARIANT='Cloud Edition'
00:02:02.552 ++ VARIANT_ID=cloud
00:02:02.552 + uname -a
00:02:02.552 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:02.552 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:02.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:03.069 Hugepages
00:02:03.069 node     hugesize     free /  total
00:02:03.069 node0   1048576kB        0 /      0
00:02:03.069 node0      2048kB        0 /      0
00:02:03.069
00:02:03.069 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:03.069 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:02:03.069 NVMe     0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:02:03.069 NVMe     0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:02:03.069 + rm -f /tmp/spdk-ld-path
00:02:03.069 + source autorun-spdk.conf
00:02:03.069 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:03.069 ++ SPDK_TEST_NVMF=1
00:02:03.069 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:03.069 ++ SPDK_TEST_USDT=1
00:02:03.069 ++ SPDK_TEST_NVMF_MDNS=1
00:02:03.069 ++ SPDK_RUN_UBSAN=1
00:02:03.069 ++ NET_TYPE=virt
00:02:03.069 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:03.069 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:03.069 ++ RUN_NIGHTLY=0
00:02:03.069 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:03.069 + [[ -n '' ]]
00:02:03.069 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:03.069 + for M in /var/spdk/build-*-manifest.txt
00:02:03.069 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:03.069 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:03.069 + for M in /var/spdk/build-*-manifest.txt
00:02:03.069 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:03.069 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:03.069 ++ uname
00:02:03.069 + [[ Linux == \L\i\n\u\x ]]
00:02:03.069 + sudo dmesg -T
00:02:03.069 + sudo dmesg --clear
00:02:03.069 + dmesg_pid=5161
00:02:03.069 + sudo dmesg -Tw
00:02:03.069 + [[ Fedora Linux == FreeBSD ]]
00:02:03.069 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:03.069 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:03.069 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:03.069 + [[ -x /usr/src/fio-static/fio ]]
00:02:03.069 + export FIO_BIN=/usr/src/fio-static/fio
00:02:03.069 + FIO_BIN=/usr/src/fio-static/fio
00:02:03.069 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:03.069 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:03.069 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:03.069 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:03.070 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:03.070 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:03.070 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:03.070 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:03.070 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:03.070 Test configuration:
00:02:03.070 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:03.070 SPDK_TEST_NVMF=1
00:02:03.070 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:03.070 SPDK_TEST_USDT=1
00:02:03.070 SPDK_TEST_NVMF_MDNS=1
00:02:03.070 SPDK_RUN_UBSAN=1
00:02:03.070 NET_TYPE=virt
00:02:03.070 SPDK_JSONRPC_GO_CLIENT=1
00:02:03.070 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:03.328 RUN_NIGHTLY=0 12:09:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:03.328 12:09:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:03.328 12:09:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:03.328 12:09:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:03.328 12:09:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.328 12:09:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.328 12:09:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.328 12:09:17 -- paths/export.sh@5 -- $ export PATH
00:02:03.328 12:09:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:03.328 12:09:17 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:03.328 12:09:17 -- common/autobuild_common.sh@447 -- $ date +%s
00:02:03.328 12:09:17 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721909357.XXXXXX
00:02:03.328 12:09:17 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721909357.gk37p5
00:02:03.328 12:09:17 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:02:03.328 12:09:17 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:02:03.328 12:09:17 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:03.328 12:09:17 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:03.328 12:09:17 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:03.328 12:09:17 -- common/autobuild_common.sh@463 -- $ get_config_params
00:02:03.328 12:09:17 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:02:03.328 12:09:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.329 12:09:17 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:02:03.329 12:09:17 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:02:03.329 12:09:17 -- pm/common@17 -- $ local monitor
00:02:03.329 12:09:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:03.329 12:09:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:03.329 12:09:18 -- pm/common@25 -- $ sleep 1
00:02:03.329 12:09:18 -- pm/common@21 -- $ date +%s
00:02:03.329 12:09:18 -- pm/common@21 -- $ date +%s
00:02:03.329 12:09:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721909358
00:02:03.329 12:09:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721909358
00:02:03.329 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721909358_collect-vmstat.pm.log
00:02:03.329 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721909358_collect-cpu-load.pm.log
00:02:04.273 12:09:19 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:02:04.273 12:09:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:04.273 12:09:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:04.273 12:09:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:04.273 12:09:19 -- spdk/autobuild.sh@16 -- $ date -u
00:02:04.273 Thu Jul 25 12:09:19 PM UTC 2024
00:02:04.273 12:09:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:04.273 v24.09-pre-327-g5038b8b39
00:02:04.273 12:09:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:04.273 12:09:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:04.273 12:09:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:04.273 12:09:19 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:04.273 12:09:19 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:04.273 12:09:19 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.273 ************************************
00:02:04.273 START TEST ubsan
00:02:04.273 ************************************
00:02:04.273 using ubsan
00:02:04.273 12:09:19 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:04.273
00:02:04.273 real	0m0.000s
00:02:04.273 user	0m0.000s
00:02:04.273 sys	0m0.000s
00:02:04.273 12:09:19 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:04.273 ************************************
00:02:04.273 END TEST ubsan
00:02:04.273 ************************************
00:02:04.273 12:09:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:04.273 12:09:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:04.273 12:09:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:04.273 12:09:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:04.273 12:09:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:04.273 12:09:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:04.273 12:09:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:04.273 12:09:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:04.273 12:09:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:04.273 12:09:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
00:02:04.532 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:04.532 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:04.790 Using 'verbs' RDMA provider
00:02:20.693 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:32.933 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:32.933 go version go1.21.1 linux/amd64
00:02:32.934 Creating mk/config.mk...done.
00:02:32.934 Creating mk/cc.flags.mk...done.
00:02:32.934 Type 'make' to build.
00:02:32.934 12:09:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:32.934 12:09:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:32.934 12:09:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:32.934 12:09:46 -- common/autotest_common.sh@10 -- $ set +x
00:02:32.934 ************************************
00:02:32.934 START TEST make
00:02:32.934 ************************************
00:02:32.934 12:09:46 make -- common/autotest_common.sh@1125 -- $ make -j10
00:02:32.934 make[1]: Nothing to be done for 'all'.
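The build itself is just configure plus make; replaying this step locally amounts to the following (flags copied verbatim from the autobuild.sh@67 line above, the -j value from the run_test line):

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
      --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
      --disable-unit-tests --enable-ubsan --enable-coverage \
      --with-ublk --with-avahi --with-golang --with-shared
  make -j10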
00:02:45.189 The Meson build system
00:02:45.189 Version: 1.3.1
00:02:45.189 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:45.189 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:45.189 Build type: native build
00:02:45.189 Program cat found: YES (/usr/bin/cat)
00:02:45.189 Project name: DPDK
00:02:45.189 Project version: 24.03.0
00:02:45.189 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:45.189 C linker for the host machine: cc ld.bfd 2.39-16
00:02:45.189 Host machine cpu family: x86_64
00:02:45.189 Host machine cpu: x86_64
00:02:45.189 Message: ## Building in Developer Mode ##
00:02:45.189 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:45.189 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:45.189 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:45.189 Program python3 found: YES (/usr/bin/python3)
00:02:45.189 Program cat found: YES (/usr/bin/cat)
00:02:45.189 Compiler for C supports arguments -march=native: YES
00:02:45.189 Checking for size of "void *" : 8
00:02:45.189 Checking for size of "void *" : 8 (cached)
00:02:45.189 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:45.189 Library m found: YES
00:02:45.189 Library numa found: YES
00:02:45.189 Has header "numaif.h" : YES
00:02:45.189 Library fdt found: NO
00:02:45.189 Library execinfo found: NO
00:02:45.189 Has header "execinfo.h" : YES
00:02:45.189 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:45.189 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:45.189 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:45.189 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:45.189 Run-time dependency openssl found: YES 3.0.9
00:02:45.189 Run-time dependency libpcap found: YES 1.10.4
00:02:45.189 Has header "pcap.h" with dependency libpcap: YES
00:02:45.189 Compiler for C supports arguments -Wcast-qual: YES
00:02:45.189 Compiler for C supports arguments -Wdeprecated: YES
00:02:45.189 Compiler for C supports arguments -Wformat: YES
00:02:45.189 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:45.189 Compiler for C supports arguments -Wformat-security: NO
00:02:45.189 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:45.189 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:45.189 Compiler for C supports arguments -Wnested-externs: YES
00:02:45.189 Compiler for C supports arguments -Wold-style-definition: YES
00:02:45.189 Compiler for C supports arguments -Wpointer-arith: YES
00:02:45.189 Compiler for C supports arguments -Wsign-compare: YES
00:02:45.189 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:45.189 Compiler for C supports arguments -Wundef: YES
00:02:45.189 Compiler for C supports arguments -Wwrite-strings: YES
00:02:45.189 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:45.189 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:45.189 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:45.189 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:45.189 Program objdump found: YES (/usr/bin/objdump)
00:02:45.189 Compiler for C supports arguments -mavx512f: YES
00:02:45.189 Checking if "AVX512 checking" compiles: YES
00:02:45.189 Fetching value of define "__SSE4_2__" : 1
"__AES__" : 1 00:02:45.189 Fetching value of define "__AVX__" : 1 00:02:45.189 Fetching value of define "__AVX2__" : 1 00:02:45.189 Fetching value of define "__AVX512BW__" : (undefined) 00:02:45.189 Fetching value of define "__AVX512CD__" : (undefined) 00:02:45.189 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:45.189 Fetching value of define "__AVX512F__" : (undefined) 00:02:45.189 Fetching value of define "__AVX512VL__" : (undefined) 00:02:45.189 Fetching value of define "__PCLMUL__" : 1 00:02:45.189 Fetching value of define "__RDRND__" : 1 00:02:45.189 Fetching value of define "__RDSEED__" : 1 00:02:45.189 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:45.189 Fetching value of define "__znver1__" : (undefined) 00:02:45.189 Fetching value of define "__znver2__" : (undefined) 00:02:45.189 Fetching value of define "__znver3__" : (undefined) 00:02:45.189 Fetching value of define "__znver4__" : (undefined) 00:02:45.189 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:45.189 Message: lib/log: Defining dependency "log" 00:02:45.189 Message: lib/kvargs: Defining dependency "kvargs" 00:02:45.189 Message: lib/telemetry: Defining dependency "telemetry" 00:02:45.189 Checking for function "getentropy" : NO 00:02:45.189 Message: lib/eal: Defining dependency "eal" 00:02:45.189 Message: lib/ring: Defining dependency "ring" 00:02:45.189 Message: lib/rcu: Defining dependency "rcu" 00:02:45.189 Message: lib/mempool: Defining dependency "mempool" 00:02:45.189 Message: lib/mbuf: Defining dependency "mbuf" 00:02:45.190 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:45.190 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:45.190 Compiler for C supports arguments -mpclmul: YES 00:02:45.190 Compiler for C supports arguments -maes: YES 00:02:45.190 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:45.190 Compiler for C supports arguments -mavx512bw: YES 00:02:45.190 Compiler for C supports arguments -mavx512dq: YES 00:02:45.190 Compiler for C supports arguments -mavx512vl: YES 00:02:45.190 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:45.190 Compiler for C supports arguments -mavx2: YES 00:02:45.190 Compiler for C supports arguments -mavx: YES 00:02:45.190 Message: lib/net: Defining dependency "net" 00:02:45.190 Message: lib/meter: Defining dependency "meter" 00:02:45.190 Message: lib/ethdev: Defining dependency "ethdev" 00:02:45.190 Message: lib/pci: Defining dependency "pci" 00:02:45.190 Message: lib/cmdline: Defining dependency "cmdline" 00:02:45.190 Message: lib/hash: Defining dependency "hash" 00:02:45.190 Message: lib/timer: Defining dependency "timer" 00:02:45.190 Message: lib/compressdev: Defining dependency "compressdev" 00:02:45.190 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:45.190 Message: lib/dmadev: Defining dependency "dmadev" 00:02:45.190 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:45.190 Message: lib/power: Defining dependency "power" 00:02:45.190 Message: lib/reorder: Defining dependency "reorder" 00:02:45.190 Message: lib/security: Defining dependency "security" 00:02:45.190 Has header "linux/userfaultfd.h" : YES 00:02:45.190 Has header "linux/vduse.h" : YES 00:02:45.190 Message: lib/vhost: Defining dependency "vhost" 00:02:45.190 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:45.190 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:45.190 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:45.190 Message: 
00:02:45.190 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:45.190 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:45.190 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:45.190 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:45.190 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:45.190 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:45.190 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:45.190 Program doxygen found: YES (/usr/bin/doxygen)
00:02:45.190 Configuring doxy-api-html.conf using configuration
00:02:45.190 Configuring doxy-api-man.conf using configuration
00:02:45.190 Program mandb found: YES (/usr/bin/mandb)
00:02:45.190 Program sphinx-build found: NO
00:02:45.190 Configuring rte_build_config.h using configuration
00:02:45.190 Message:
00:02:45.190 =================
00:02:45.190 Applications Enabled
00:02:45.190 =================
00:02:45.190
00:02:45.190 apps:
00:02:45.190
00:02:45.190
00:02:45.190 Message:
00:02:45.190 =================
00:02:45.190 Libraries Enabled
00:02:45.190 =================
00:02:45.190
00:02:45.190 libs:
00:02:45.190 	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:45.190 	net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:45.190 	cryptodev, dmadev, power, reorder, security, vhost,
00:02:45.190
00:02:45.190 Message:
00:02:45.190 ===============
00:02:45.190 Drivers Enabled
00:02:45.190 ===============
00:02:45.190
00:02:45.190 common:
00:02:45.190
00:02:45.190 bus:
00:02:45.190 	pci, vdev,
00:02:45.190 mempool:
00:02:45.190 	ring,
00:02:45.190 dma:
00:02:45.190
00:02:45.190 net:
00:02:45.190
00:02:45.190 crypto:
00:02:45.190
00:02:45.190 compress:
00:02:45.190
00:02:45.190 vdpa:
00:02:45.190
00:02:45.190
00:02:45.190 Message:
00:02:45.190 =================
00:02:45.190 Content Skipped
00:02:45.190 =================
00:02:45.190
00:02:45.190 apps:
00:02:45.190 	dumpcap:	explicitly disabled via build config
00:02:45.190 	graph:	explicitly disabled via build config
00:02:45.190 	pdump:	explicitly disabled via build config
00:02:45.190 	proc-info:	explicitly disabled via build config
00:02:45.190 	test-acl:	explicitly disabled via build config
00:02:45.190 	test-bbdev:	explicitly disabled via build config
00:02:45.190 	test-cmdline:	explicitly disabled via build config
00:02:45.190 	test-compress-perf:	explicitly disabled via build config
00:02:45.190 	test-crypto-perf:	explicitly disabled via build config
00:02:45.190 	test-dma-perf:	explicitly disabled via build config
00:02:45.190 	test-eventdev:	explicitly disabled via build config
00:02:45.190 	test-fib:	explicitly disabled via build config
00:02:45.190 	test-flow-perf:	explicitly disabled via build config
00:02:45.190 	test-gpudev:	explicitly disabled via build config
00:02:45.190 	test-mldev:	explicitly disabled via build config
00:02:45.190 	test-pipeline:	explicitly disabled via build config
00:02:45.190 	test-pmd:	explicitly disabled via build config
00:02:45.190 	test-regex:	explicitly disabled via build config
00:02:45.190 	test-sad:	explicitly disabled via build config
00:02:45.190 	test-security-perf:	explicitly disabled via build config
00:02:45.190
00:02:45.190 libs:
00:02:45.190 	argparse:	explicitly disabled via build config
00:02:45.190 	metrics:	explicitly disabled via build config
00:02:45.190 	acl:	explicitly disabled via build config
00:02:45.190 	bbdev:	explicitly disabled via build config
00:02:45.190 	bitratestats:	explicitly disabled via build config
00:02:45.190 	bpf:	explicitly disabled via build config
00:02:45.190 	cfgfile:	explicitly disabled via build config
00:02:45.190 	distributor:	explicitly disabled via build config
00:02:45.190 	efd:	explicitly disabled via build config
00:02:45.190 	eventdev:	explicitly disabled via build config
00:02:45.190 	dispatcher:	explicitly disabled via build config
00:02:45.190 	gpudev:	explicitly disabled via build config
00:02:45.190 	gro:	explicitly disabled via build config
00:02:45.190 	gso:	explicitly disabled via build config
00:02:45.190 	ip_frag:	explicitly disabled via build config
00:02:45.190 	jobstats:	explicitly disabled via build config
00:02:45.190 	latencystats:	explicitly disabled via build config
00:02:45.190 	lpm:	explicitly disabled via build config
00:02:45.190 	member:	explicitly disabled via build config
00:02:45.190 	pcapng:	explicitly disabled via build config
00:02:45.190 	rawdev:	explicitly disabled via build config
00:02:45.190 	regexdev:	explicitly disabled via build config
00:02:45.190 	mldev:	explicitly disabled via build config
00:02:45.190 	rib:	explicitly disabled via build config
00:02:45.190 	sched:	explicitly disabled via build config
00:02:45.190 	stack:	explicitly disabled via build config
00:02:45.190 	ipsec:	explicitly disabled via build config
00:02:45.190 	pdcp:	explicitly disabled via build config
00:02:45.190 	fib:	explicitly disabled via build config
00:02:45.190 	port:	explicitly disabled via build config
00:02:45.190 	pdump:	explicitly disabled via build config
00:02:45.190 	table:	explicitly disabled via build config
00:02:45.190 	pipeline:	explicitly disabled via build config
00:02:45.190 	graph:	explicitly disabled via build config
00:02:45.190 	node:	explicitly disabled via build config
00:02:45.190
00:02:45.190 drivers:
00:02:45.190 	common/cpt:	not in enabled drivers build config
00:02:45.190 	common/dpaax:	not in enabled drivers build config
00:02:45.190 	common/iavf:	not in enabled drivers build config
00:02:45.190 	common/idpf:	not in enabled drivers build config
00:02:45.190 	common/ionic:	not in enabled drivers build config
00:02:45.190 	common/mvep:	not in enabled drivers build config
00:02:45.190 	common/octeontx:	not in enabled drivers build config
00:02:45.190 	bus/auxiliary:	not in enabled drivers build config
00:02:45.190 	bus/cdx:	not in enabled drivers build config
00:02:45.190 	bus/dpaa:	not in enabled drivers build config
00:02:45.190 	bus/fslmc:	not in enabled drivers build config
00:02:45.190 	bus/ifpga:	not in enabled drivers build config
00:02:45.190 	bus/platform:	not in enabled drivers build config
00:02:45.190 	bus/uacce:	not in enabled drivers build config
00:02:45.190 	bus/vmbus:	not in enabled drivers build config
00:02:45.190 	common/cnxk:	not in enabled drivers build config
00:02:45.190 	common/mlx5:	not in enabled drivers build config
00:02:45.190 	common/nfp:	not in enabled drivers build config
00:02:45.190 	common/nitrox:	not in enabled drivers build config
00:02:45.190 	common/qat:	not in enabled drivers build config
00:02:45.190 	common/sfc_efx:	not in enabled drivers build config
00:02:45.190 	mempool/bucket:	not in enabled drivers build config
00:02:45.190 	mempool/cnxk:	not in enabled drivers build config
00:02:45.190 	mempool/dpaa:	not in enabled drivers build config
00:02:45.190 	mempool/dpaa2:	not in enabled drivers build config
00:02:45.190 	mempool/octeontx:	not in enabled drivers build config
00:02:45.191 	mempool/stack:	not in enabled drivers build config
00:02:45.191 	dma/cnxk:	not in enabled drivers build config
00:02:45.191 	dma/dpaa:	not in enabled drivers build config
00:02:45.191 	dma/dpaa2:	not in enabled drivers build config
00:02:45.191 	dma/hisilicon:	not in enabled drivers build config
00:02:45.191 	dma/idxd:	not in enabled drivers build config
00:02:45.191 	dma/ioat:	not in enabled drivers build config
00:02:45.191 	dma/skeleton:	not in enabled drivers build config
00:02:45.191 	net/af_packet:	not in enabled drivers build config
00:02:45.191 	net/af_xdp:	not in enabled drivers build config
00:02:45.191 	net/ark:	not in enabled drivers build config
00:02:45.191 	net/atlantic:	not in enabled drivers build config
00:02:45.191 	net/avp:	not in enabled drivers build config
00:02:45.191 	net/axgbe:	not in enabled drivers build config
00:02:45.191 	net/bnx2x:	not in enabled drivers build config
00:02:45.191 	net/bnxt:	not in enabled drivers build config
00:02:45.191 	net/bonding:	not in enabled drivers build config
00:02:45.191 	net/cnxk:	not in enabled drivers build config
00:02:45.191 	net/cpfl:	not in enabled drivers build config
00:02:45.191 	net/cxgbe:	not in enabled drivers build config
00:02:45.191 	net/dpaa:	not in enabled drivers build config
00:02:45.191 	net/dpaa2:	not in enabled drivers build config
00:02:45.191 	net/e1000:	not in enabled drivers build config
00:02:45.191 	net/ena:	not in enabled drivers build config
00:02:45.191 	net/enetc:	not in enabled drivers build config
00:02:45.191 	net/enetfec:	not in enabled drivers build config
00:02:45.191 	net/enic:	not in enabled drivers build config
00:02:45.191 	net/failsafe:	not in enabled drivers build config
00:02:45.191 	net/fm10k:	not in enabled drivers build config
00:02:45.191 	net/gve:	not in enabled drivers build config
00:02:45.191 	net/hinic:	not in enabled drivers build config
00:02:45.191 	net/hns3:	not in enabled drivers build config
00:02:45.191 	net/i40e:	not in enabled drivers build config
00:02:45.191 	net/iavf:	not in enabled drivers build config
00:02:45.191 	net/ice:	not in enabled drivers build config
00:02:45.191 	net/idpf:	not in enabled drivers build config
00:02:45.191 	net/igc:	not in enabled drivers build config
00:02:45.191 	net/ionic:	not in enabled drivers build config
00:02:45.191 	net/ipn3ke:	not in enabled drivers build config
00:02:45.191 	net/ixgbe:	not in enabled drivers build config
00:02:45.191 	net/mana:	not in enabled drivers build config
00:02:45.191 	net/memif:	not in enabled drivers build config
00:02:45.191 	net/mlx4:	not in enabled drivers build config
00:02:45.191 	net/mlx5:	not in enabled drivers build config
00:02:45.191 	net/mvneta:	not in enabled drivers build config
00:02:45.191 	net/mvpp2:	not in enabled drivers build config
00:02:45.191 	net/netvsc:	not in enabled drivers build config
00:02:45.191 	net/nfb:	not in enabled drivers build config
00:02:45.191 	net/nfp:	not in enabled drivers build config
00:02:45.191 	net/ngbe:	not in enabled drivers build config
00:02:45.191 	net/null:	not in enabled drivers build config
00:02:45.191 	net/octeontx:	not in enabled drivers build config
00:02:45.191 	net/octeon_ep:	not in enabled drivers build config
00:02:45.191 	net/pcap:	not in enabled drivers build config
00:02:45.191 	net/pfe:	not in enabled drivers build config
00:02:45.191 	net/qede:	not in enabled drivers build config
00:02:45.191 	net/ring:	not in enabled drivers build config
00:02:45.191 	net/sfc:	not in enabled drivers build config
00:02:45.191 	net/softnic:	not in enabled drivers build config
00:02:45.191 	net/tap:	not in enabled drivers build config
00:02:45.191 	net/thunderx:	not in enabled drivers build config
00:02:45.191 	net/txgbe:	not in enabled drivers build config
00:02:45.191 	net/vdev_netvsc:	not in enabled drivers build config
00:02:45.191 	net/vhost:	not in enabled drivers build config
00:02:45.191 	net/virtio:	not in enabled drivers build config
00:02:45.191 	net/vmxnet3:	not in enabled drivers build config
00:02:45.191 	raw/*:	missing internal dependency, "rawdev"
00:02:45.191 	crypto/armv8:	not in enabled drivers build config
00:02:45.191 	crypto/bcmfs:	not in enabled drivers build config
00:02:45.191 	crypto/caam_jr:	not in enabled drivers build config
00:02:45.191 	crypto/ccp:	not in enabled drivers build config
00:02:45.191 	crypto/cnxk:	not in enabled drivers build config
00:02:45.191 	crypto/dpaa_sec:	not in enabled drivers build config
00:02:45.191 	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:45.191 	crypto/ipsec_mb:	not in enabled drivers build config
00:02:45.191 	crypto/mlx5:	not in enabled drivers build config
00:02:45.191 	crypto/mvsam:	not in enabled drivers build config
00:02:45.191 	crypto/nitrox:	not in enabled drivers build config
00:02:45.191 	crypto/null:	not in enabled drivers build config
00:02:45.191 	crypto/octeontx:	not in enabled drivers build config
00:02:45.191 	crypto/openssl:	not in enabled drivers build config
00:02:45.191 	crypto/scheduler:	not in enabled drivers build config
00:02:45.191 	crypto/uadk:	not in enabled drivers build config
00:02:45.191 	crypto/virtio:	not in enabled drivers build config
00:02:45.191 	compress/isal:	not in enabled drivers build config
00:02:45.191 	compress/mlx5:	not in enabled drivers build config
00:02:45.191 	compress/nitrox:	not in enabled drivers build config
00:02:45.191 	compress/octeontx:	not in enabled drivers build config
00:02:45.191 	compress/zlib:	not in enabled drivers build config
00:02:45.191 	regex/*:	missing internal dependency, "regexdev"
00:02:45.191 	ml/*:	missing internal dependency, "mldev"
00:02:45.191 	vdpa/ifc:	not in enabled drivers build config
00:02:45.191 	vdpa/mlx5:	not in enabled drivers build config
00:02:45.191 	vdpa/nfp:	not in enabled drivers build config
00:02:45.191 	vdpa/sfc:	not in enabled drivers build config
00:02:45.191 	event/*:	missing internal dependency, "eventdev"
00:02:45.191 	baseband/*:	missing internal dependency, "bbdev"
00:02:45.191 	gpu/*:	missing internal dependency, "gpudev"
00:02:45.191
00:02:45.191
00:02:45.191 Build targets in project: 85
00:02:45.191
00:02:45.191 DPDK 24.03.0
00:02:45.191
00:02:45.191 User defined options
00:02:45.191 	buildtype         : debug
00:02:45.191 	default_library   : shared
00:02:45.191 	libdir            : lib
00:02:45.191 	prefix            : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:45.191 	c_args            : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:45.191 	c_link_args       :
00:02:45.191 	cpu_instruction_set: native
00:02:45.191 	disable_apps      : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:45.191 	disable_libs      : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:45.191 	enable_docs       : false
00:02:45.191 	enable_drivers    : bus,bus/pci,bus/vdev,mempool/ring
00:02:45.191 	enable_kmods      : false
00:02:45.191 	max_lcores        : 128
00:02:45.191 	tests             : false
00:02:45.191
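The "User defined options" summary above reflects how SPDK's configure sets up the bundled DPDK. The option names are DPDK's real meson options, so an equivalent standalone setup would be roughly the following; the exact invocation is not printed in the log, so treat this as a sketch:

  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
      -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
      -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
      -Denable_docs=false -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  ninja -C build-tmp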
00:02:45.191 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:45.191 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:45.191 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:45.191 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:45.191 [3/268] Linking static target lib/librte_kvargs.a
00:02:45.191 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:45.191 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:45.191 [6/268] Linking static target lib/librte_log.a
00:02:45.191 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.191 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:45.191 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:45.191 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:45.191 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:45.191 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:45.191 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:45.191 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:45.191 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:45.191 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:45.191 [17/268] Linking static target lib/librte_telemetry.a
00:02:45.191 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:45.449 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.449 [20/268] Linking target lib/librte_log.so.24.1
00:02:45.707 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:45.707 [22/268] Linking target lib/librte_kvargs.so.24.1
00:02:45.965 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:45.965 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:45.965 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:45.965 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:45.965 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:45.965 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:46.223 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:46.223 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:46.223 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:46.223 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.223 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:46.223 [34/268] Linking target lib/librte_telemetry.so.24.1
00:02:46.482 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:46.482 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:46.740 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:46.740 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:46.997 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:46.997 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:46.997 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:46.997 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:46.997 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.259 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.259 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.259 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.259 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.259 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.517 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.776 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.776 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.776 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:48.035 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:48.035 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.035 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:48.293 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.294 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:48.294 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.294 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.294 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.552 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.552 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.810 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.810 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:49.068 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:49.068 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:49.068 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:49.326 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:49.326 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:49.326 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:49.326 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:49.326 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:49.584 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:49.584 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:49.584 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:49.842 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:49.842 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:49.842 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:49.842 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:49.842 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:50.101 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:50.359 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:50.359 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:50.617 [85/268] Linking static target lib/librte_eal.a 00:02:50.617 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:50.617 [87/268] Linking static target lib/librte_ring.a 00:02:50.875 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:50.875 [89/268] Linking static target lib/librte_rcu.a 00:02:50.875 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:50.875 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:50.875 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:51.133 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:51.133 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:51.133 [95/268] Linking static target lib/librte_mempool.a 00:02:51.133 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.133 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:51.392 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.650 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:51.650 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:51.650 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:51.908 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:51.908 [103/268] Linking static target lib/librte_mbuf.a 00:02:51.908 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:52.166 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:52.166 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:52.166 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:52.166 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:52.424 [109/268] Linking static target lib/librte_net.a 00:02:52.424 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:52.424 [111/268] Linking static target lib/librte_meter.a 00:02:52.424 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.683 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:52.683 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:52.683 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:52.683 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.003 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.003 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.003 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:53.261 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:53.520 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:53.778 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:53.778 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:53.778 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:54.036 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:54.036 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:54.036 [127/268] Linking static target lib/librte_pci.a 00:02:54.036 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:54.036 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:54.036 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:54.036 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:54.294 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:54.294 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:54.294 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:54.294 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:54.294 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.294 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:54.552 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:54.552 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:54.552 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:54.552 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:54.552 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:54.552 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:54.552 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:54.552 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:54.552 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:54.552 [147/268] Linking static target lib/librte_ethdev.a 00:02:54.810 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:54.810 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:54.810 [150/268] Linking static target lib/librte_cmdline.a 00:02:55.068 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:55.325 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:55.325 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:55.325 [154/268] Linking static target lib/librte_timer.a 00:02:55.583 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:55.583 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:55.583 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:55.583 [158/268] Linking static target lib/librte_hash.a 00:02:55.583 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:55.841 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:55.841 
[161/268] Linking static target lib/librte_compressdev.a 00:02:55.841 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:56.099 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.099 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:56.099 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:56.357 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:56.357 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:56.357 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:56.357 [169/268] Linking static target lib/librte_dmadev.a 00:02:56.615 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.615 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:56.615 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:56.615 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:56.615 [174/268] Linking static target lib/librte_cryptodev.a 00:02:56.893 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.893 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:56.893 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.171 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:57.429 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:57.429 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:57.429 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:57.429 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.429 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:57.429 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:57.687 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:57.687 [186/268] Linking static target lib/librte_power.a 00:02:57.945 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:57.945 [188/268] Linking static target lib/librte_reorder.a 00:02:57.945 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:58.204 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:58.204 [191/268] Linking static target lib/librte_security.a 00:02:58.204 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:58.204 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:58.461 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.461 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:58.718 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.718 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.975 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:58.975 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:58.975 [200/268] Compiling C 
object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:58.975 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.233 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:59.491 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:59.491 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:59.491 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:59.491 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:59.491 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:59.749 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.749 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:59.749 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:59.749 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:59.749 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:00.007 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:00.007 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:00.007 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.007 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.007 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.007 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.007 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:00.007 [220/268] Linking static target drivers/librte_bus_vdev.a 00:03:00.007 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:00.007 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:00.265 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.265 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:00.265 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.265 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.265 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:00.523 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.144 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:01.144 [230/268] Linking static target lib/librte_vhost.a 00:03:01.711 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.711 [232/268] Linking target lib/librte_eal.so.24.1 00:03:01.969 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:01.969 [234/268] Linking target lib/librte_ring.so.24.1 00:03:01.969 [235/268] Linking target lib/librte_pci.so.24.1 00:03:01.969 [236/268] Linking target lib/librte_timer.so.24.1 00:03:01.969 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:01.969 [238/268] Linking target lib/librte_meter.so.24.1 00:03:01.969 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 
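
The [N/268] progress lines above come from ninja building the bundled DPDK subproject with the options summarized at the start of this excerpt (buildtype debug, shared default_library, trimmed app/lib lists, and only the pci/vdev buses and ring mempool drivers enabled). As a rough sketch, assuming DPDK's standard meson option names and abbreviating the long disable lists, that configuration corresponds to an invocation along these lines:

    # Sketch reconstructed from the "User defined options" summary above;
    # the disable_apps/disable_libs lists are abbreviated, not copied in full.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --buildtype=debug \
        --default-library=shared \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon' \
        -Ddisable_apps='dumpcap,graph,pdump,...' \
        -Ddisable_libs='acl,argparse,bbdev,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Dmax_lcores=128 \
        -Dtests=false
    ninja -C build-tmp -j 10
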
00:03:02.228 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:02.228 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:02.228 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:02.228 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:02.228 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:02.228 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:02.228 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:02.228 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:02.228 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:02.486 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:02.486 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:02.486 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:02.487 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.487 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.487 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:02.745 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:02.745 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:02.745 [257/268] Linking target lib/librte_net.so.24.1 00:03:02.745 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:02.745 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:02.745 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:02.745 [261/268] Linking target lib/librte_hash.so.24.1 00:03:02.745 [262/268] Linking target lib/librte_security.so.24.1 00:03:03.003 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:03.003 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:03.003 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:03.003 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:03.003 [267/268] Linking target lib/librte_power.so.24.1 00:03:03.262 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:03.262 INFO: autodetecting backend as ninja 00:03:03.262 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:04.196 CC lib/ut_mock/mock.o 00:03:04.196 CC lib/log/log.o 00:03:04.196 CC lib/log/log_flags.o 00:03:04.196 CC lib/log/log_deprecated.o 00:03:04.196 CC lib/ut/ut.o 00:03:04.455 LIB libspdk_ut.a 00:03:04.455 LIB libspdk_log.a 00:03:04.455 SO libspdk_ut.so.2.0 00:03:04.455 SO libspdk_log.so.7.0 00:03:04.455 SYMLINK libspdk_ut.so 00:03:04.455 LIB libspdk_ut_mock.a 00:03:04.455 SO libspdk_ut_mock.so.6.0 00:03:04.714 SYMLINK libspdk_log.so 00:03:04.714 SYMLINK libspdk_ut_mock.so 00:03:04.714 CC lib/dma/dma.o 00:03:04.714 CC lib/util/base64.o 00:03:04.714 CC lib/util/bit_array.o 00:03:04.714 CC lib/util/cpuset.o 00:03:04.714 CC lib/util/crc16.o 00:03:04.714 CC lib/util/crc32c.o 00:03:04.714 CXX lib/trace_parser/trace.o 00:03:04.714 CC lib/util/crc32.o 00:03:04.714 CC lib/ioat/ioat.o 00:03:04.972 CC lib/vfio_user/host/vfio_user_pci.o 00:03:04.972 CC lib/util/crc32_ieee.o 00:03:04.972 CC lib/util/crc64.o 00:03:04.972 CC 
lib/vfio_user/host/vfio_user.o 00:03:04.972 CC lib/util/dif.o 00:03:04.972 CC lib/util/fd.o 00:03:04.972 LIB libspdk_dma.a 00:03:04.972 CC lib/util/fd_group.o 00:03:04.972 SO libspdk_dma.so.4.0 00:03:04.972 CC lib/util/file.o 00:03:05.230 SYMLINK libspdk_dma.so 00:03:05.230 CC lib/util/hexlify.o 00:03:05.230 CC lib/util/iov.o 00:03:05.230 CC lib/util/math.o 00:03:05.230 LIB libspdk_ioat.a 00:03:05.230 CC lib/util/net.o 00:03:05.230 SO libspdk_ioat.so.7.0 00:03:05.230 LIB libspdk_vfio_user.a 00:03:05.230 CC lib/util/pipe.o 00:03:05.230 SO libspdk_vfio_user.so.5.0 00:03:05.230 SYMLINK libspdk_ioat.so 00:03:05.230 CC lib/util/strerror_tls.o 00:03:05.230 CC lib/util/string.o 00:03:05.230 CC lib/util/uuid.o 00:03:05.230 CC lib/util/xor.o 00:03:05.230 SYMLINK libspdk_vfio_user.so 00:03:05.230 CC lib/util/zipf.o 00:03:05.488 LIB libspdk_util.a 00:03:05.746 SO libspdk_util.so.10.0 00:03:05.746 SYMLINK libspdk_util.so 00:03:05.746 LIB libspdk_trace_parser.a 00:03:06.004 SO libspdk_trace_parser.so.5.0 00:03:06.004 SYMLINK libspdk_trace_parser.so 00:03:06.004 CC lib/rdma_utils/rdma_utils.o 00:03:06.004 CC lib/env_dpdk/env.o 00:03:06.004 CC lib/conf/conf.o 00:03:06.004 CC lib/env_dpdk/memory.o 00:03:06.004 CC lib/vmd/vmd.o 00:03:06.004 CC lib/idxd/idxd.o 00:03:06.004 CC lib/rdma_provider/common.o 00:03:06.004 CC lib/env_dpdk/pci.o 00:03:06.004 CC lib/vmd/led.o 00:03:06.004 CC lib/json/json_parse.o 00:03:06.262 CC lib/json/json_util.o 00:03:06.262 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:06.262 LIB libspdk_conf.a 00:03:06.262 CC lib/json/json_write.o 00:03:06.262 SO libspdk_conf.so.6.0 00:03:06.262 LIB libspdk_rdma_utils.a 00:03:06.262 SO libspdk_rdma_utils.so.1.0 00:03:06.262 SYMLINK libspdk_conf.so 00:03:06.262 CC lib/idxd/idxd_user.o 00:03:06.262 CC lib/env_dpdk/init.o 00:03:06.521 SYMLINK libspdk_rdma_utils.so 00:03:06.521 CC lib/env_dpdk/threads.o 00:03:06.521 LIB libspdk_rdma_provider.a 00:03:06.521 CC lib/env_dpdk/pci_ioat.o 00:03:06.521 SO libspdk_rdma_provider.so.6.0 00:03:06.521 SYMLINK libspdk_rdma_provider.so 00:03:06.521 CC lib/env_dpdk/pci_virtio.o 00:03:06.521 CC lib/idxd/idxd_kernel.o 00:03:06.521 LIB libspdk_json.a 00:03:06.521 CC lib/env_dpdk/pci_vmd.o 00:03:06.521 CC lib/env_dpdk/pci_idxd.o 00:03:06.521 CC lib/env_dpdk/pci_event.o 00:03:06.521 SO libspdk_json.so.6.0 00:03:06.521 LIB libspdk_vmd.a 00:03:06.781 CC lib/env_dpdk/sigbus_handler.o 00:03:06.781 SO libspdk_vmd.so.6.0 00:03:06.781 SYMLINK libspdk_json.so 00:03:06.781 CC lib/env_dpdk/pci_dpdk.o 00:03:06.781 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:06.781 LIB libspdk_idxd.a 00:03:06.781 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:06.781 SYMLINK libspdk_vmd.so 00:03:06.781 SO libspdk_idxd.so.12.0 00:03:06.781 SYMLINK libspdk_idxd.so 00:03:07.048 CC lib/jsonrpc/jsonrpc_server.o 00:03:07.048 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:07.048 CC lib/jsonrpc/jsonrpc_client.o 00:03:07.048 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:07.309 LIB libspdk_jsonrpc.a 00:03:07.309 SO libspdk_jsonrpc.so.6.0 00:03:07.309 SYMLINK libspdk_jsonrpc.so 00:03:07.309 LIB libspdk_env_dpdk.a 00:03:07.567 SO libspdk_env_dpdk.so.15.0 00:03:07.567 CC lib/rpc/rpc.o 00:03:07.825 SYMLINK libspdk_env_dpdk.so 00:03:07.825 LIB libspdk_rpc.a 00:03:07.825 SO libspdk_rpc.so.6.0 00:03:07.825 SYMLINK libspdk_rpc.so 00:03:08.084 CC lib/notify/notify.o 00:03:08.084 CC lib/notify/notify_rpc.o 00:03:08.084 CC lib/trace/trace.o 00:03:08.084 CC lib/trace/trace_flags.o 00:03:08.084 CC lib/trace/trace_rpc.o 00:03:08.084 CC lib/keyring/keyring.o 00:03:08.084 CC 
lib/keyring/keyring_rpc.o 00:03:08.343 LIB libspdk_notify.a 00:03:08.343 SO libspdk_notify.so.6.0 00:03:08.343 LIB libspdk_trace.a 00:03:08.608 LIB libspdk_keyring.a 00:03:08.608 SYMLINK libspdk_notify.so 00:03:08.608 SO libspdk_trace.so.10.0 00:03:08.608 SO libspdk_keyring.so.1.0 00:03:08.608 SYMLINK libspdk_trace.so 00:03:08.608 SYMLINK libspdk_keyring.so 00:03:08.866 CC lib/sock/sock.o 00:03:08.866 CC lib/sock/sock_rpc.o 00:03:08.866 CC lib/thread/thread.o 00:03:08.866 CC lib/thread/iobuf.o 00:03:09.432 LIB libspdk_sock.a 00:03:09.432 SO libspdk_sock.so.10.0 00:03:09.432 SYMLINK libspdk_sock.so 00:03:09.690 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:09.690 CC lib/nvme/nvme_ctrlr.o 00:03:09.690 CC lib/nvme/nvme_fabric.o 00:03:09.690 CC lib/nvme/nvme_ns_cmd.o 00:03:09.690 CC lib/nvme/nvme_ns.o 00:03:09.690 CC lib/nvme/nvme_pcie_common.o 00:03:09.690 CC lib/nvme/nvme_pcie.o 00:03:09.690 CC lib/nvme/nvme_qpair.o 00:03:09.690 CC lib/nvme/nvme.o 00:03:10.624 LIB libspdk_thread.a 00:03:10.624 SO libspdk_thread.so.10.1 00:03:10.624 CC lib/nvme/nvme_quirks.o 00:03:10.624 CC lib/nvme/nvme_transport.o 00:03:10.624 SYMLINK libspdk_thread.so 00:03:10.624 CC lib/nvme/nvme_discovery.o 00:03:10.624 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:10.624 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:10.624 CC lib/nvme/nvme_tcp.o 00:03:10.624 CC lib/nvme/nvme_opal.o 00:03:10.884 CC lib/nvme/nvme_io_msg.o 00:03:11.142 CC lib/accel/accel.o 00:03:11.142 CC lib/nvme/nvme_poll_group.o 00:03:11.142 CC lib/accel/accel_rpc.o 00:03:11.401 CC lib/blob/blobstore.o 00:03:11.401 CC lib/init/json_config.o 00:03:11.401 CC lib/virtio/virtio.o 00:03:11.401 CC lib/virtio/virtio_vhost_user.o 00:03:11.401 CC lib/virtio/virtio_vfio_user.o 00:03:11.401 CC lib/accel/accel_sw.o 00:03:11.659 CC lib/init/subsystem.o 00:03:11.659 CC lib/virtio/virtio_pci.o 00:03:11.659 CC lib/blob/request.o 00:03:11.916 CC lib/blob/zeroes.o 00:03:11.916 CC lib/init/subsystem_rpc.o 00:03:11.916 CC lib/init/rpc.o 00:03:11.916 CC lib/blob/blob_bs_dev.o 00:03:11.916 CC lib/nvme/nvme_zns.o 00:03:11.916 LIB libspdk_virtio.a 00:03:11.916 CC lib/nvme/nvme_stubs.o 00:03:11.916 LIB libspdk_init.a 00:03:11.916 CC lib/nvme/nvme_auth.o 00:03:12.200 SO libspdk_virtio.so.7.0 00:03:12.200 SO libspdk_init.so.5.0 00:03:12.200 LIB libspdk_accel.a 00:03:12.200 SYMLINK libspdk_virtio.so 00:03:12.200 CC lib/nvme/nvme_cuse.o 00:03:12.200 CC lib/nvme/nvme_rdma.o 00:03:12.200 SO libspdk_accel.so.16.0 00:03:12.200 SYMLINK libspdk_init.so 00:03:12.200 SYMLINK libspdk_accel.so 00:03:12.458 CC lib/event/reactor.o 00:03:12.458 CC lib/event/app.o 00:03:12.458 CC lib/event/log_rpc.o 00:03:12.458 CC lib/bdev/bdev.o 00:03:12.458 CC lib/event/app_rpc.o 00:03:12.458 CC lib/bdev/bdev_rpc.o 00:03:12.716 CC lib/bdev/bdev_zone.o 00:03:12.716 CC lib/event/scheduler_static.o 00:03:12.716 CC lib/bdev/part.o 00:03:12.716 CC lib/bdev/scsi_nvme.o 00:03:12.974 LIB libspdk_event.a 00:03:12.974 SO libspdk_event.so.14.0 00:03:12.974 SYMLINK libspdk_event.so 00:03:13.541 LIB libspdk_nvme.a 00:03:13.799 SO libspdk_nvme.so.13.1 00:03:14.057 SYMLINK libspdk_nvme.so 00:03:14.315 LIB libspdk_blob.a 00:03:14.315 SO libspdk_blob.so.11.0 00:03:14.315 SYMLINK libspdk_blob.so 00:03:14.574 CC lib/blobfs/blobfs.o 00:03:14.574 CC lib/blobfs/tree.o 00:03:14.574 CC lib/lvol/lvol.o 00:03:15.140 LIB libspdk_bdev.a 00:03:15.140 SO libspdk_bdev.so.16.0 00:03:15.452 SYMLINK libspdk_bdev.so 00:03:15.452 LIB libspdk_blobfs.a 00:03:15.452 CC lib/nvmf/ctrlr.o 00:03:15.452 CC lib/nvmf/ctrlr_discovery.o 00:03:15.452 CC lib/nvmf/ctrlr_bdev.o 
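
From "INFO: autodetecting backend as ninja" onward the log has switched from the DPDK subproject to SPDK's own build, whose quiet make output prints one terse line per step: CC for a compile, LIB for a static archive, SO for a shared library, and SYMLINK for the matching link placed in the build output. A minimal sketch of the commands behind this phase, with the exact configure flags being an assumption rather than the job's recorded invocation:

    # Sketch only: SPDK's configure/make producing the CC/LIB/SO/SYMLINK lines here.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug   # flag assumed, matching the debug DPDK build above
    make -j10                    # -j10 matches the "ninja ... -j 10" line earlier
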
00:03:15.452 CC lib/nvmf/subsystem.o 00:03:15.452 CC lib/ublk/ublk.o 00:03:15.452 CC lib/ftl/ftl_core.o 00:03:15.452 CC lib/nbd/nbd.o 00:03:15.452 SO libspdk_blobfs.so.10.0 00:03:15.452 CC lib/scsi/dev.o 00:03:15.710 SYMLINK libspdk_blobfs.so 00:03:15.710 CC lib/ftl/ftl_init.o 00:03:15.710 LIB libspdk_lvol.a 00:03:15.710 SO libspdk_lvol.so.10.0 00:03:15.710 SYMLINK libspdk_lvol.so 00:03:15.710 CC lib/nbd/nbd_rpc.o 00:03:15.710 CC lib/scsi/lun.o 00:03:15.710 CC lib/scsi/port.o 00:03:15.968 CC lib/scsi/scsi.o 00:03:15.968 LIB libspdk_nbd.a 00:03:15.968 CC lib/nvmf/nvmf.o 00:03:15.968 CC lib/ftl/ftl_layout.o 00:03:15.968 SO libspdk_nbd.so.7.0 00:03:15.968 CC lib/scsi/scsi_bdev.o 00:03:15.968 SYMLINK libspdk_nbd.so 00:03:15.968 CC lib/scsi/scsi_pr.o 00:03:16.225 CC lib/scsi/scsi_rpc.o 00:03:16.225 CC lib/scsi/task.o 00:03:16.225 CC lib/ublk/ublk_rpc.o 00:03:16.225 CC lib/ftl/ftl_debug.o 00:03:16.225 CC lib/ftl/ftl_io.o 00:03:16.225 CC lib/ftl/ftl_sb.o 00:03:16.225 LIB libspdk_ublk.a 00:03:16.225 CC lib/ftl/ftl_l2p.o 00:03:16.483 SO libspdk_ublk.so.3.0 00:03:16.483 CC lib/nvmf/nvmf_rpc.o 00:03:16.483 SYMLINK libspdk_ublk.so 00:03:16.483 CC lib/ftl/ftl_l2p_flat.o 00:03:16.483 CC lib/nvmf/transport.o 00:03:16.483 LIB libspdk_scsi.a 00:03:16.483 CC lib/ftl/ftl_nv_cache.o 00:03:16.483 CC lib/ftl/ftl_band.o 00:03:16.483 SO libspdk_scsi.so.9.0 00:03:16.740 CC lib/ftl/ftl_band_ops.o 00:03:16.740 CC lib/ftl/ftl_writer.o 00:03:16.740 SYMLINK libspdk_scsi.so 00:03:16.740 CC lib/ftl/ftl_rq.o 00:03:16.740 CC lib/nvmf/tcp.o 00:03:16.740 CC lib/nvmf/stubs.o 00:03:16.997 CC lib/ftl/ftl_reloc.o 00:03:16.997 CC lib/ftl/ftl_l2p_cache.o 00:03:16.997 CC lib/iscsi/conn.o 00:03:17.254 CC lib/vhost/vhost.o 00:03:17.254 CC lib/nvmf/mdns_server.o 00:03:17.254 CC lib/nvmf/rdma.o 00:03:17.254 CC lib/nvmf/auth.o 00:03:17.254 CC lib/vhost/vhost_rpc.o 00:03:17.254 CC lib/ftl/ftl_p2l.o 00:03:17.517 CC lib/ftl/mngt/ftl_mngt.o 00:03:17.517 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:17.517 CC lib/vhost/vhost_scsi.o 00:03:17.773 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:17.773 CC lib/iscsi/init_grp.o 00:03:17.773 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:17.773 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:17.773 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:17.773 CC lib/vhost/vhost_blk.o 00:03:18.030 CC lib/iscsi/iscsi.o 00:03:18.030 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.030 CC lib/vhost/rte_vhost_user.o 00:03:18.030 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.030 CC lib/iscsi/md5.o 00:03:18.030 CC lib/iscsi/param.o 00:03:18.286 CC lib/iscsi/portal_grp.o 00:03:18.286 CC lib/iscsi/tgt_node.o 00:03:18.286 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.286 CC lib/iscsi/iscsi_subsystem.o 00:03:18.543 CC lib/iscsi/iscsi_rpc.o 00:03:18.543 CC lib/iscsi/task.o 00:03:18.543 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:18.543 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:18.802 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:18.802 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:18.802 CC lib/ftl/utils/ftl_conf.o 00:03:18.802 CC lib/ftl/utils/ftl_md.o 00:03:18.802 CC lib/ftl/utils/ftl_mempool.o 00:03:18.802 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.802 CC lib/ftl/utils/ftl_property.o 00:03:18.802 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:19.059 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:19.059 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:19.059 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:19.059 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:19.059 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:19.316 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:19.316 LIB libspdk_nvmf.a 00:03:19.316 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:03:19.316 LIB libspdk_vhost.a 00:03:19.316 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:19.316 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:19.316 LIB libspdk_iscsi.a 00:03:19.316 SO libspdk_vhost.so.8.0 00:03:19.316 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:19.316 CC lib/ftl/base/ftl_base_dev.o 00:03:19.316 SO libspdk_nvmf.so.19.0 00:03:19.316 SO libspdk_iscsi.so.8.0 00:03:19.316 CC lib/ftl/base/ftl_base_bdev.o 00:03:19.316 SYMLINK libspdk_vhost.so 00:03:19.316 CC lib/ftl/ftl_trace.o 00:03:19.574 SYMLINK libspdk_iscsi.so 00:03:19.574 SYMLINK libspdk_nvmf.so 00:03:19.574 LIB libspdk_ftl.a 00:03:19.831 SO libspdk_ftl.so.9.0 00:03:20.397 SYMLINK libspdk_ftl.so 00:03:20.655 CC module/env_dpdk/env_dpdk_rpc.o 00:03:20.655 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:20.655 CC module/blob/bdev/blob_bdev.o 00:03:20.655 CC module/sock/posix/posix.o 00:03:20.655 CC module/keyring/linux/keyring.o 00:03:20.655 CC module/scheduler/gscheduler/gscheduler.o 00:03:20.655 CC module/keyring/file/keyring.o 00:03:20.655 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:20.655 CC module/accel/error/accel_error.o 00:03:20.655 CC module/accel/ioat/accel_ioat.o 00:03:20.655 LIB libspdk_env_dpdk_rpc.a 00:03:20.913 SO libspdk_env_dpdk_rpc.so.6.0 00:03:20.913 CC module/keyring/file/keyring_rpc.o 00:03:20.913 SYMLINK libspdk_env_dpdk_rpc.so 00:03:20.913 LIB libspdk_scheduler_gscheduler.a 00:03:20.913 LIB libspdk_scheduler_dpdk_governor.a 00:03:20.913 CC module/keyring/linux/keyring_rpc.o 00:03:20.913 SO libspdk_scheduler_gscheduler.so.4.0 00:03:20.913 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:20.913 CC module/accel/error/accel_error_rpc.o 00:03:20.913 CC module/accel/ioat/accel_ioat_rpc.o 00:03:20.913 LIB libspdk_scheduler_dynamic.a 00:03:20.913 SYMLINK libspdk_scheduler_gscheduler.so 00:03:20.913 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:20.913 SO libspdk_scheduler_dynamic.so.4.0 00:03:20.913 LIB libspdk_blob_bdev.a 00:03:20.913 LIB libspdk_keyring_file.a 00:03:20.913 SO libspdk_blob_bdev.so.11.0 00:03:20.913 LIB libspdk_keyring_linux.a 00:03:20.913 SYMLINK libspdk_scheduler_dynamic.so 00:03:21.170 SO libspdk_keyring_file.so.1.0 00:03:21.170 SO libspdk_keyring_linux.so.1.0 00:03:21.170 CC module/accel/dsa/accel_dsa.o 00:03:21.170 CC module/accel/dsa/accel_dsa_rpc.o 00:03:21.170 SYMLINK libspdk_blob_bdev.so 00:03:21.170 LIB libspdk_accel_error.a 00:03:21.170 LIB libspdk_accel_ioat.a 00:03:21.170 SYMLINK libspdk_keyring_linux.so 00:03:21.170 SYMLINK libspdk_keyring_file.so 00:03:21.170 SO libspdk_accel_error.so.2.0 00:03:21.170 SO libspdk_accel_ioat.so.6.0 00:03:21.170 CC module/accel/iaa/accel_iaa.o 00:03:21.170 CC module/accel/iaa/accel_iaa_rpc.o 00:03:21.170 SYMLINK libspdk_accel_error.so 00:03:21.170 SYMLINK libspdk_accel_ioat.so 00:03:21.428 LIB libspdk_accel_dsa.a 00:03:21.428 CC module/blobfs/bdev/blobfs_bdev.o 00:03:21.428 CC module/bdev/error/vbdev_error.o 00:03:21.428 CC module/bdev/delay/vbdev_delay.o 00:03:21.428 CC module/bdev/gpt/gpt.o 00:03:21.428 CC module/bdev/lvol/vbdev_lvol.o 00:03:21.428 SO libspdk_accel_dsa.so.5.0 00:03:21.428 LIB libspdk_accel_iaa.a 00:03:21.428 CC module/bdev/malloc/bdev_malloc.o 00:03:21.428 LIB libspdk_sock_posix.a 00:03:21.428 SO libspdk_accel_iaa.so.3.0 00:03:21.428 SYMLINK libspdk_accel_dsa.so 00:03:21.428 CC module/bdev/error/vbdev_error_rpc.o 00:03:21.428 SO libspdk_sock_posix.so.6.0 00:03:21.428 CC module/bdev/null/bdev_null.o 00:03:21.428 SYMLINK libspdk_accel_iaa.so 00:03:21.428 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:21.709 
CC module/bdev/gpt/vbdev_gpt.o 00:03:21.709 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:21.709 SYMLINK libspdk_sock_posix.so 00:03:21.709 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:21.709 LIB libspdk_bdev_error.a 00:03:21.709 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:21.709 SO libspdk_bdev_error.so.6.0 00:03:21.709 LIB libspdk_blobfs_bdev.a 00:03:21.709 CC module/bdev/null/bdev_null_rpc.o 00:03:21.709 SO libspdk_blobfs_bdev.so.6.0 00:03:21.709 LIB libspdk_bdev_malloc.a 00:03:21.709 SYMLINK libspdk_bdev_error.so 00:03:21.967 SO libspdk_bdev_malloc.so.6.0 00:03:21.967 LIB libspdk_bdev_delay.a 00:03:21.967 LIB libspdk_bdev_gpt.a 00:03:21.967 CC module/bdev/nvme/bdev_nvme.o 00:03:21.967 SYMLINK libspdk_blobfs_bdev.so 00:03:21.967 SO libspdk_bdev_delay.so.6.0 00:03:21.967 SO libspdk_bdev_gpt.so.6.0 00:03:21.967 SYMLINK libspdk_bdev_malloc.so 00:03:21.967 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:21.967 SYMLINK libspdk_bdev_delay.so 00:03:21.967 LIB libspdk_bdev_null.a 00:03:21.967 SYMLINK libspdk_bdev_gpt.so 00:03:21.967 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.967 CC module/bdev/raid/bdev_raid.o 00:03:21.967 SO libspdk_bdev_null.so.6.0 00:03:21.967 CC module/bdev/split/vbdev_split.o 00:03:21.967 SYMLINK libspdk_bdev_null.so 00:03:21.967 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:21.967 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.967 LIB libspdk_bdev_lvol.a 00:03:22.224 CC module/bdev/aio/bdev_aio.o 00:03:22.224 CC module/bdev/ftl/bdev_ftl.o 00:03:22.224 SO libspdk_bdev_lvol.so.6.0 00:03:22.224 SYMLINK libspdk_bdev_lvol.so 00:03:22.224 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.224 CC module/bdev/raid/bdev_raid_rpc.o 00:03:22.224 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:22.224 LIB libspdk_bdev_split.a 00:03:22.224 SO libspdk_bdev_split.so.6.0 00:03:22.482 SYMLINK libspdk_bdev_split.so 00:03:22.482 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.482 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:22.482 CC module/bdev/raid/raid0.o 00:03:22.482 LIB libspdk_bdev_ftl.a 00:03:22.482 CC module/bdev/aio/bdev_aio_rpc.o 00:03:22.482 LIB libspdk_bdev_passthru.a 00:03:22.482 CC module/bdev/raid/raid1.o 00:03:22.482 SO libspdk_bdev_ftl.so.6.0 00:03:22.482 SO libspdk_bdev_passthru.so.6.0 00:03:22.482 SYMLINK libspdk_bdev_ftl.so 00:03:22.482 SYMLINK libspdk_bdev_passthru.so 00:03:22.482 CC module/bdev/raid/concat.o 00:03:22.482 LIB libspdk_bdev_zone_block.a 00:03:22.482 LIB libspdk_bdev_aio.a 00:03:22.741 SO libspdk_bdev_zone_block.so.6.0 00:03:22.741 SO libspdk_bdev_aio.so.6.0 00:03:22.741 CC module/bdev/nvme/nvme_rpc.o 00:03:22.741 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.741 SYMLINK libspdk_bdev_zone_block.so 00:03:22.741 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.741 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.741 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.741 SYMLINK libspdk_bdev_aio.so 00:03:22.741 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.741 CC module/bdev/nvme/vbdev_opal.o 00:03:22.741 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:22.741 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.999 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.999 LIB libspdk_bdev_raid.a 00:03:22.999 SO libspdk_bdev_raid.so.6.0 00:03:22.999 LIB libspdk_bdev_iscsi.a 00:03:22.999 SO libspdk_bdev_iscsi.so.6.0 00:03:23.257 SYMLINK libspdk_bdev_raid.so 00:03:23.257 SYMLINK libspdk_bdev_iscsi.so 00:03:23.257 LIB libspdk_bdev_virtio.a 00:03:23.257 SO libspdk_bdev_virtio.so.6.0 00:03:23.257 SYMLINK libspdk_bdev_virtio.so 00:03:24.192 LIB libspdk_bdev_nvme.a 00:03:24.192 
SO libspdk_bdev_nvme.so.7.0 00:03:24.192 SYMLINK libspdk_bdev_nvme.so 00:03:24.758 CC module/event/subsystems/iobuf/iobuf.o 00:03:24.758 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:24.758 CC module/event/subsystems/sock/sock.o 00:03:24.758 CC module/event/subsystems/vmd/vmd.o 00:03:24.758 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.758 CC module/event/subsystems/scheduler/scheduler.o 00:03:24.758 CC module/event/subsystems/keyring/keyring.o 00:03:24.758 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.758 LIB libspdk_event_scheduler.a 00:03:25.015 LIB libspdk_event_keyring.a 00:03:25.015 SO libspdk_event_scheduler.so.4.0 00:03:25.015 LIB libspdk_event_sock.a 00:03:25.015 LIB libspdk_event_vhost_blk.a 00:03:25.015 LIB libspdk_event_iobuf.a 00:03:25.015 SO libspdk_event_keyring.so.1.0 00:03:25.015 SO libspdk_event_sock.so.5.0 00:03:25.015 SO libspdk_event_vhost_blk.so.3.0 00:03:25.015 SO libspdk_event_iobuf.so.3.0 00:03:25.015 LIB libspdk_event_vmd.a 00:03:25.015 SYMLINK libspdk_event_scheduler.so 00:03:25.015 SYMLINK libspdk_event_keyring.so 00:03:25.015 SYMLINK libspdk_event_vhost_blk.so 00:03:25.015 SO libspdk_event_vmd.so.6.0 00:03:25.015 SYMLINK libspdk_event_sock.so 00:03:25.015 SYMLINK libspdk_event_iobuf.so 00:03:25.015 SYMLINK libspdk_event_vmd.so 00:03:25.273 CC module/event/subsystems/accel/accel.o 00:03:25.531 LIB libspdk_event_accel.a 00:03:25.531 SO libspdk_event_accel.so.6.0 00:03:25.531 SYMLINK libspdk_event_accel.so 00:03:25.789 CC module/event/subsystems/bdev/bdev.o 00:03:26.048 LIB libspdk_event_bdev.a 00:03:26.048 SO libspdk_event_bdev.so.6.0 00:03:26.048 SYMLINK libspdk_event_bdev.so 00:03:26.306 CC module/event/subsystems/scsi/scsi.o 00:03:26.306 CC module/event/subsystems/ublk/ublk.o 00:03:26.306 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:26.306 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:26.306 CC module/event/subsystems/nbd/nbd.o 00:03:26.564 LIB libspdk_event_ublk.a 00:03:26.564 LIB libspdk_event_nbd.a 00:03:26.564 LIB libspdk_event_scsi.a 00:03:26.564 SO libspdk_event_ublk.so.3.0 00:03:26.564 SO libspdk_event_nbd.so.6.0 00:03:26.564 SO libspdk_event_scsi.so.6.0 00:03:26.564 SYMLINK libspdk_event_nbd.so 00:03:26.564 SYMLINK libspdk_event_ublk.so 00:03:26.564 LIB libspdk_event_nvmf.a 00:03:26.564 SYMLINK libspdk_event_scsi.so 00:03:26.564 SO libspdk_event_nvmf.so.6.0 00:03:26.822 SYMLINK libspdk_event_nvmf.so 00:03:26.822 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:26.822 CC module/event/subsystems/iscsi/iscsi.o 00:03:27.081 LIB libspdk_event_vhost_scsi.a 00:03:27.081 SO libspdk_event_vhost_scsi.so.3.0 00:03:27.081 LIB libspdk_event_iscsi.a 00:03:27.081 SO libspdk_event_iscsi.so.6.0 00:03:27.081 SYMLINK libspdk_event_vhost_scsi.so 00:03:27.081 SYMLINK libspdk_event_iscsi.so 00:03:27.338 SO libspdk.so.6.0 00:03:27.338 SYMLINK libspdk.so 00:03:27.596 CXX app/trace/trace.o 00:03:27.596 CC app/trace_record/trace_record.o 00:03:27.596 CC app/nvmf_tgt/nvmf_main.o 00:03:27.596 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:27.596 CC app/iscsi_tgt/iscsi_tgt.o 00:03:27.596 CC examples/ioat/perf/perf.o 00:03:27.596 CC app/spdk_tgt/spdk_tgt.o 00:03:27.596 CC examples/util/zipf/zipf.o 00:03:27.854 CC test/thread/poller_perf/poller_perf.o 00:03:27.854 LINK nvmf_tgt 00:03:27.854 LINK interrupt_tgt 00:03:27.854 LINK spdk_trace_record 00:03:27.854 LINK zipf 00:03:27.854 LINK iscsi_tgt 00:03:27.854 LINK spdk_tgt 00:03:27.854 LINK poller_perf 00:03:27.854 LINK ioat_perf 00:03:28.113 LINK spdk_trace 00:03:28.113 CC 
app/spdk_nvme_perf/perf.o 00:03:28.113 CC app/spdk_lspci/spdk_lspci.o 00:03:28.113 CC examples/ioat/verify/verify.o 00:03:28.113 CC app/spdk_nvme_identify/identify.o 00:03:28.372 TEST_HEADER include/spdk/accel.h 00:03:28.372 TEST_HEADER include/spdk/accel_module.h 00:03:28.372 TEST_HEADER include/spdk/assert.h 00:03:28.372 TEST_HEADER include/spdk/barrier.h 00:03:28.372 TEST_HEADER include/spdk/base64.h 00:03:28.372 TEST_HEADER include/spdk/bdev.h 00:03:28.372 TEST_HEADER include/spdk/bdev_module.h 00:03:28.372 TEST_HEADER include/spdk/bdev_zone.h 00:03:28.372 TEST_HEADER include/spdk/bit_array.h 00:03:28.372 TEST_HEADER include/spdk/bit_pool.h 00:03:28.372 CC app/spdk_nvme_discover/discovery_aer.o 00:03:28.372 TEST_HEADER include/spdk/blob_bdev.h 00:03:28.372 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:28.372 TEST_HEADER include/spdk/blobfs.h 00:03:28.372 TEST_HEADER include/spdk/blob.h 00:03:28.372 TEST_HEADER include/spdk/conf.h 00:03:28.372 TEST_HEADER include/spdk/config.h 00:03:28.372 TEST_HEADER include/spdk/cpuset.h 00:03:28.372 TEST_HEADER include/spdk/crc16.h 00:03:28.372 TEST_HEADER include/spdk/crc32.h 00:03:28.372 TEST_HEADER include/spdk/crc64.h 00:03:28.372 TEST_HEADER include/spdk/dif.h 00:03:28.372 TEST_HEADER include/spdk/dma.h 00:03:28.372 TEST_HEADER include/spdk/endian.h 00:03:28.372 TEST_HEADER include/spdk/env_dpdk.h 00:03:28.372 TEST_HEADER include/spdk/env.h 00:03:28.372 TEST_HEADER include/spdk/event.h 00:03:28.372 TEST_HEADER include/spdk/fd_group.h 00:03:28.372 TEST_HEADER include/spdk/fd.h 00:03:28.372 TEST_HEADER include/spdk/file.h 00:03:28.372 TEST_HEADER include/spdk/ftl.h 00:03:28.372 CC test/dma/test_dma/test_dma.o 00:03:28.372 TEST_HEADER include/spdk/gpt_spec.h 00:03:28.372 TEST_HEADER include/spdk/hexlify.h 00:03:28.372 TEST_HEADER include/spdk/histogram_data.h 00:03:28.372 TEST_HEADER include/spdk/idxd.h 00:03:28.372 TEST_HEADER include/spdk/idxd_spec.h 00:03:28.372 TEST_HEADER include/spdk/init.h 00:03:28.372 TEST_HEADER include/spdk/ioat.h 00:03:28.372 TEST_HEADER include/spdk/ioat_spec.h 00:03:28.372 TEST_HEADER include/spdk/iscsi_spec.h 00:03:28.372 LINK spdk_lspci 00:03:28.372 TEST_HEADER include/spdk/json.h 00:03:28.372 TEST_HEADER include/spdk/jsonrpc.h 00:03:28.372 TEST_HEADER include/spdk/keyring.h 00:03:28.372 TEST_HEADER include/spdk/keyring_module.h 00:03:28.372 TEST_HEADER include/spdk/likely.h 00:03:28.372 CC test/app/bdev_svc/bdev_svc.o 00:03:28.372 TEST_HEADER include/spdk/log.h 00:03:28.372 TEST_HEADER include/spdk/lvol.h 00:03:28.372 TEST_HEADER include/spdk/memory.h 00:03:28.372 TEST_HEADER include/spdk/mmio.h 00:03:28.372 TEST_HEADER include/spdk/nbd.h 00:03:28.372 TEST_HEADER include/spdk/net.h 00:03:28.372 TEST_HEADER include/spdk/notify.h 00:03:28.372 TEST_HEADER include/spdk/nvme.h 00:03:28.372 TEST_HEADER include/spdk/nvme_intel.h 00:03:28.372 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:28.372 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:28.372 TEST_HEADER include/spdk/nvme_spec.h 00:03:28.372 TEST_HEADER include/spdk/nvme_zns.h 00:03:28.372 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:28.372 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:28.372 TEST_HEADER include/spdk/nvmf.h 00:03:28.372 TEST_HEADER include/spdk/nvmf_spec.h 00:03:28.372 TEST_HEADER include/spdk/nvmf_transport.h 00:03:28.372 TEST_HEADER include/spdk/opal.h 00:03:28.372 TEST_HEADER include/spdk/opal_spec.h 00:03:28.372 TEST_HEADER include/spdk/pci_ids.h 00:03:28.372 TEST_HEADER include/spdk/pipe.h 00:03:28.372 TEST_HEADER include/spdk/queue.h 00:03:28.372 
TEST_HEADER include/spdk/reduce.h 00:03:28.372 TEST_HEADER include/spdk/rpc.h 00:03:28.372 TEST_HEADER include/spdk/scheduler.h 00:03:28.372 TEST_HEADER include/spdk/scsi.h 00:03:28.372 TEST_HEADER include/spdk/scsi_spec.h 00:03:28.372 TEST_HEADER include/spdk/sock.h 00:03:28.372 TEST_HEADER include/spdk/stdinc.h 00:03:28.372 TEST_HEADER include/spdk/string.h 00:03:28.372 TEST_HEADER include/spdk/thread.h 00:03:28.372 TEST_HEADER include/spdk/trace.h 00:03:28.372 TEST_HEADER include/spdk/trace_parser.h 00:03:28.372 TEST_HEADER include/spdk/tree.h 00:03:28.372 TEST_HEADER include/spdk/ublk.h 00:03:28.372 TEST_HEADER include/spdk/util.h 00:03:28.372 TEST_HEADER include/spdk/uuid.h 00:03:28.372 TEST_HEADER include/spdk/version.h 00:03:28.372 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:28.372 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:28.372 TEST_HEADER include/spdk/vhost.h 00:03:28.372 TEST_HEADER include/spdk/vmd.h 00:03:28.372 TEST_HEADER include/spdk/xor.h 00:03:28.372 TEST_HEADER include/spdk/zipf.h 00:03:28.372 CXX test/cpp_headers/accel.o 00:03:28.372 LINK verify 00:03:28.372 LINK spdk_nvme_discover 00:03:28.630 CC test/env/mem_callbacks/mem_callbacks.o 00:03:28.630 LINK bdev_svc 00:03:28.630 CC test/env/vtophys/vtophys.o 00:03:28.630 CXX test/cpp_headers/accel_module.o 00:03:28.630 CXX test/cpp_headers/assert.o 00:03:28.630 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:28.630 LINK test_dma 00:03:28.889 LINK vtophys 00:03:28.889 CXX test/cpp_headers/barrier.o 00:03:28.889 LINK env_dpdk_post_init 00:03:29.147 LINK spdk_nvme_perf 00:03:29.147 CC test/app/histogram_perf/histogram_perf.o 00:03:29.147 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:29.147 LINK spdk_nvme_identify 00:03:29.147 CXX test/cpp_headers/base64.o 00:03:29.147 LINK mem_callbacks 00:03:29.147 LINK histogram_perf 00:03:29.405 CC examples/thread/thread/thread_ex.o 00:03:29.405 CC examples/sock/hello_world/hello_sock.o 00:03:29.405 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:29.405 CXX test/cpp_headers/bdev.o 00:03:29.405 CC examples/vmd/lsvmd/lsvmd.o 00:03:29.405 CC test/env/memory/memory_ut.o 00:03:29.405 CC app/spdk_top/spdk_top.o 00:03:29.405 CC test/env/pci/pci_ut.o 00:03:29.663 LINK nvme_fuzz 00:03:29.663 LINK lsvmd 00:03:29.663 LINK hello_sock 00:03:29.663 LINK thread 00:03:29.663 CXX test/cpp_headers/bdev_module.o 00:03:29.663 CXX test/cpp_headers/bdev_zone.o 00:03:29.663 CXX test/cpp_headers/bit_array.o 00:03:29.921 CXX test/cpp_headers/bit_pool.o 00:03:29.921 CC examples/vmd/led/led.o 00:03:29.921 CXX test/cpp_headers/blob_bdev.o 00:03:29.921 LINK pci_ut 00:03:29.921 CXX test/cpp_headers/blobfs_bdev.o 00:03:29.921 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:29.921 LINK led 00:03:30.184 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:30.184 CXX test/cpp_headers/blobfs.o 00:03:30.184 CC app/vhost/vhost.o 00:03:30.453 LINK spdk_top 00:03:30.453 CXX test/cpp_headers/blob.o 00:03:30.453 CC examples/idxd/perf/perf.o 00:03:30.453 LINK vhost 00:03:30.453 CC test/event/event_perf/event_perf.o 00:03:30.453 LINK vhost_fuzz 00:03:30.453 CC examples/accel/perf/accel_perf.o 00:03:30.711 CXX test/cpp_headers/conf.o 00:03:30.711 LINK event_perf 00:03:30.711 LINK memory_ut 00:03:30.711 CC test/nvme/aer/aer.o 00:03:30.711 LINK idxd_perf 00:03:30.968 CC test/nvme/reset/reset.o 00:03:30.968 CXX test/cpp_headers/config.o 00:03:30.968 CXX test/cpp_headers/cpuset.o 00:03:30.968 CC app/spdk_dd/spdk_dd.o 00:03:30.968 CC test/event/reactor/reactor.o 00:03:30.968 CC test/event/reactor_perf/reactor_perf.o 
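
The long TEST_HEADER runs list every public spdk/*.h header pulled into a test binary, and the CXX test/cpp_headers/*.o lines appear to be a header-hygiene check: each public header is compiled on its own in a C++ translation unit so that C++ consumers can include it directly. Conceptually it amounts to a loop like the following illustrative sketch, which is not SPDK's actual harness:

    # Illustrative only -- the real harness generates one object per header itself.
    for h in include/spdk/*.h; do
        echo "#include <spdk/$(basename "$h")>" |
            g++ -x c++ -Iinclude -c - -o /dev/null
    done
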
00:03:30.968 LINK accel_perf 00:03:31.225 CXX test/cpp_headers/crc16.o 00:03:31.225 LINK aer 00:03:31.225 LINK reset 00:03:31.225 LINK reactor 00:03:31.225 CC test/event/app_repeat/app_repeat.o 00:03:31.225 LINK reactor_perf 00:03:31.225 CXX test/cpp_headers/crc32.o 00:03:31.225 CXX test/cpp_headers/crc64.o 00:03:31.225 LINK app_repeat 00:03:31.225 CXX test/cpp_headers/dif.o 00:03:31.225 LINK spdk_dd 00:03:31.483 LINK iscsi_fuzz 00:03:31.483 CXX test/cpp_headers/dma.o 00:03:31.483 CC test/event/scheduler/scheduler.o 00:03:31.483 CC test/nvme/sgl/sgl.o 00:03:31.483 CXX test/cpp_headers/endian.o 00:03:31.483 CXX test/cpp_headers/env_dpdk.o 00:03:31.740 CXX test/cpp_headers/env.o 00:03:31.740 LINK scheduler 00:03:31.997 CC examples/blob/hello_world/hello_blob.o 00:03:31.997 LINK sgl 00:03:31.997 CC test/app/jsoncat/jsoncat.o 00:03:31.997 CC examples/nvme/hello_world/hello_world.o 00:03:31.997 CC examples/nvme/reconnect/reconnect.o 00:03:31.997 CXX test/cpp_headers/event.o 00:03:31.997 CC app/fio/nvme/fio_plugin.o 00:03:31.997 CC test/nvme/e2edp/nvme_dp.o 00:03:31.997 CXX test/cpp_headers/fd_group.o 00:03:32.254 LINK jsoncat 00:03:32.254 CXX test/cpp_headers/fd.o 00:03:32.254 LINK hello_world 00:03:32.254 LINK hello_blob 00:03:32.512 LINK reconnect 00:03:32.512 CXX test/cpp_headers/file.o 00:03:32.512 LINK nvme_dp 00:03:32.512 CXX test/cpp_headers/ftl.o 00:03:32.512 CC test/app/stub/stub.o 00:03:32.512 CC examples/bdev/hello_world/hello_bdev.o 00:03:32.512 CC app/fio/bdev/fio_plugin.o 00:03:32.770 LINK spdk_nvme 00:03:32.770 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:32.770 LINK stub 00:03:32.770 CXX test/cpp_headers/gpt_spec.o 00:03:32.770 CC examples/blob/cli/blobcli.o 00:03:32.770 CC test/nvme/overhead/overhead.o 00:03:33.036 LINK hello_bdev 00:03:33.036 CC examples/bdev/bdevperf/bdevperf.o 00:03:33.036 CC examples/nvme/arbitration/arbitration.o 00:03:33.036 CXX test/cpp_headers/hexlify.o 00:03:33.325 CC test/rpc_client/rpc_client_test.o 00:03:33.325 LINK spdk_bdev 00:03:33.325 CXX test/cpp_headers/histogram_data.o 00:03:33.325 LINK overhead 00:03:33.325 LINK nvme_manage 00:03:33.325 LINK blobcli 00:03:33.325 LINK rpc_client_test 00:03:33.325 CXX test/cpp_headers/idxd.o 00:03:33.325 CC examples/nvme/hotplug/hotplug.o 00:03:33.583 CC test/accel/dif/dif.o 00:03:33.583 LINK arbitration 00:03:33.583 CC test/nvme/err_injection/err_injection.o 00:03:33.583 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:33.583 CXX test/cpp_headers/idxd_spec.o 00:03:33.583 LINK hotplug 00:03:34.146 CC examples/nvme/abort/abort.o 00:03:34.146 LINK bdevperf 00:03:34.146 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:34.146 LINK err_injection 00:03:34.146 CXX test/cpp_headers/init.o 00:03:34.146 LINK cmb_copy 00:03:34.146 CXX test/cpp_headers/ioat.o 00:03:34.146 CC test/blobfs/mkfs/mkfs.o 00:03:34.146 LINK pmr_persistence 00:03:34.146 LINK dif 00:03:34.146 CC test/nvme/startup/startup.o 00:03:34.146 CC test/nvme/reserve/reserve.o 00:03:34.146 CXX test/cpp_headers/ioat_spec.o 00:03:34.146 CC test/nvme/simple_copy/simple_copy.o 00:03:34.146 CC test/nvme/connect_stress/connect_stress.o 00:03:34.146 CXX test/cpp_headers/iscsi_spec.o 00:03:34.146 LINK abort 00:03:34.146 LINK mkfs 00:03:34.146 LINK startup 00:03:34.402 CXX test/cpp_headers/json.o 00:03:34.402 LINK reserve 00:03:34.402 LINK connect_stress 00:03:34.402 CC test/nvme/boot_partition/boot_partition.o 00:03:34.402 LINK simple_copy 00:03:34.402 CC test/nvme/compliance/nvme_compliance.o 00:03:34.402 CXX test/cpp_headers/jsonrpc.o 00:03:34.659 CC 
test/nvme/fused_ordering/fused_ordering.o 00:03:34.659 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:34.659 LINK boot_partition 00:03:34.659 CC examples/nvmf/nvmf/nvmf.o 00:03:34.659 CC test/lvol/esnap/esnap.o 00:03:34.659 CC test/nvme/fdp/fdp.o 00:03:34.659 CC test/nvme/cuse/cuse.o 00:03:34.659 CXX test/cpp_headers/keyring.o 00:03:34.915 LINK doorbell_aers 00:03:34.915 LINK fused_ordering 00:03:34.915 LINK nvme_compliance 00:03:34.915 CXX test/cpp_headers/keyring_module.o 00:03:34.915 LINK nvmf 00:03:34.915 CXX test/cpp_headers/likely.o 00:03:34.915 CXX test/cpp_headers/log.o 00:03:34.915 LINK fdp 00:03:34.915 CXX test/cpp_headers/lvol.o 00:03:34.915 CXX test/cpp_headers/memory.o 00:03:34.915 CC test/bdev/bdevio/bdevio.o 00:03:35.173 CXX test/cpp_headers/mmio.o 00:03:35.173 CXX test/cpp_headers/nbd.o 00:03:35.173 CXX test/cpp_headers/net.o 00:03:35.173 CXX test/cpp_headers/notify.o 00:03:35.173 CXX test/cpp_headers/nvme.o 00:03:35.173 CXX test/cpp_headers/nvme_intel.o 00:03:35.173 CXX test/cpp_headers/nvme_ocssd.o 00:03:35.173 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:35.430 CXX test/cpp_headers/nvme_spec.o 00:03:35.430 CXX test/cpp_headers/nvme_zns.o 00:03:35.430 CXX test/cpp_headers/nvmf_cmd.o 00:03:35.430 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:35.430 CXX test/cpp_headers/nvmf.o 00:03:35.430 CXX test/cpp_headers/nvmf_spec.o 00:03:35.430 CXX test/cpp_headers/nvmf_transport.o 00:03:35.430 LINK bdevio 00:03:35.430 CXX test/cpp_headers/opal.o 00:03:35.430 CXX test/cpp_headers/opal_spec.o 00:03:35.430 CXX test/cpp_headers/pci_ids.o 00:03:35.688 CXX test/cpp_headers/pipe.o 00:03:35.688 CXX test/cpp_headers/queue.o 00:03:35.688 CXX test/cpp_headers/reduce.o 00:03:35.688 CXX test/cpp_headers/rpc.o 00:03:35.688 CXX test/cpp_headers/scheduler.o 00:03:35.688 CXX test/cpp_headers/scsi.o 00:03:35.688 CXX test/cpp_headers/scsi_spec.o 00:03:35.688 CXX test/cpp_headers/sock.o 00:03:35.688 CXX test/cpp_headers/stdinc.o 00:03:35.688 CXX test/cpp_headers/string.o 00:03:35.688 CXX test/cpp_headers/thread.o 00:03:35.688 CXX test/cpp_headers/trace.o 00:03:35.946 CXX test/cpp_headers/trace_parser.o 00:03:35.946 CXX test/cpp_headers/tree.o 00:03:35.946 CXX test/cpp_headers/ublk.o 00:03:35.946 CXX test/cpp_headers/util.o 00:03:35.946 CXX test/cpp_headers/uuid.o 00:03:35.946 CXX test/cpp_headers/version.o 00:03:35.946 CXX test/cpp_headers/vfio_user_pci.o 00:03:35.946 CXX test/cpp_headers/vfio_user_spec.o 00:03:35.946 CXX test/cpp_headers/vhost.o 00:03:35.946 CXX test/cpp_headers/vmd.o 00:03:35.946 LINK cuse 00:03:35.946 CXX test/cpp_headers/xor.o 00:03:35.946 CXX test/cpp_headers/zipf.o 00:03:40.134 LINK esnap 00:03:40.134 00:03:40.134 real 1m8.530s 00:03:40.134 user 6m55.358s 00:03:40.134 sys 1m37.414s 00:03:40.134 12:10:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:40.134 12:10:54 make -- common/autotest_common.sh@10 -- $ set +x 00:03:40.134 ************************************ 00:03:40.134 END TEST make 00:03:40.134 ************************************ 00:03:40.134 12:10:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:40.134 12:10:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:40.134 12:10:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:40.134 12:10:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.134 12:10:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:40.134 12:10:54 -- pm/common@44 -- $ pid=5196 00:03:40.134 12:10:54 -- pm/common@50 -- $ kill -TERM 5196 
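
With "END TEST make" the build portion is done: the timing summary (real 1m8.530s) is the wall-clock cost of the make test, and the kill -TERM lines are autobuild stopping its per-patch resource monitors. Everything after this point is bash xtrace output from autotest.sh in the form "HH:MM:SS <tag> -- <script>@<line> -- $ <command>". A prefix of that shape can be produced with a PS4 along these lines (a sketch; SPDK's actual PS4 definition may differ, and $TEST_TAG here is a hypothetical stand-in for the "make"/test-name field):

    # Sketch of an xtrace prefix similar to the one in this log; not SPDK's exact PS4.
    export PS4='$(date +%T) ${TEST_TAG:-} -- ${BASH_SOURCE}@${LINENO} -- $ '
    set -x   # every command now echoes with the timestamped prefix
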
00:03:40.134 12:10:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.134 12:10:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:40.134 12:10:54 -- pm/common@44 -- $ pid=5198 00:03:40.134 12:10:54 -- pm/common@50 -- $ kill -TERM 5198 00:03:40.134 12:10:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:40.134 12:10:54 -- nvmf/common.sh@7 -- # uname -s 00:03:40.134 12:10:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.134 12:10:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.134 12:10:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.134 12:10:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.134 12:10:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.134 12:10:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.134 12:10:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.134 12:10:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.134 12:10:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.134 12:10:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.134 12:10:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:03:40.134 12:10:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:03:40.134 12:10:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.134 12:10:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.134 12:10:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:40.134 12:10:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:40.134 12:10:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:40.134 12:10:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.134 12:10:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.134 12:10:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.134 12:10:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.134 12:10:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.134 12:10:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.134 12:10:54 -- paths/export.sh@5 -- # export PATH 00:03:40.134 12:10:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.134 12:10:54 -- nvmf/common.sh@47 -- # : 0 00:03:40.134 12:10:54 -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:03:40.134 12:10:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:40.134 12:10:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:40.134 12:10:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.134 12:10:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.134 12:10:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:40.134 12:10:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:40.134 12:10:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:40.134 12:10:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:40.134 12:10:54 -- spdk/autotest.sh@32 -- # uname -s 00:03:40.134 12:10:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:40.134 12:10:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:40.134 12:10:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.134 12:10:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:40.134 12:10:54 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.134 12:10:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.134 12:10:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.134 12:10:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.134 12:10:54 -- spdk/autotest.sh@48 -- # udevadm_pid=54594 00:03:40.134 12:10:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:40.134 12:10:54 -- pm/common@17 -- # local monitor 00:03:40.134 12:10:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.134 12:10:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.134 12:10:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.134 12:10:54 -- pm/common@21 -- # date +%s 00:03:40.134 12:10:54 -- pm/common@21 -- # date +%s 00:03:40.134 12:10:54 -- pm/common@25 -- # sleep 1 00:03:40.134 12:10:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721909454 00:03:40.134 12:10:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721909454 00:03:40.134 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721909454_collect-vmstat.pm.log 00:03:40.134 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721909454_collect-cpu-load.pm.log 00:03:41.069 12:10:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:41.069 12:10:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:41.069 12:10:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:41.069 12:10:55 -- common/autotest_common.sh@10 -- # set +x 00:03:41.069 12:10:55 -- spdk/autotest.sh@59 -- # create_test_list 00:03:41.069 12:10:55 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:41.069 12:10:55 -- common/autotest_common.sh@10 -- # set +x 00:03:41.328 12:10:55 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:41.328 12:10:55 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:41.328 12:10:55 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:41.328 12:10:55 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:41.328 12:10:55 -- spdk/autotest.sh@63 -- # cd 
/home/vagrant/spdk_repo/spdk 00:03:41.328 12:10:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.328 12:10:55 -- common/autotest_common.sh@1455 -- # uname 00:03:41.328 12:10:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:41.328 12:10:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:41.328 12:10:55 -- common/autotest_common.sh@1475 -- # uname 00:03:41.328 12:10:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:41.328 12:10:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:41.328 12:10:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:41.328 12:10:55 -- spdk/autotest.sh@72 -- # hash lcov 00:03:41.328 12:10:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:41.328 12:10:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:41.328 --rc lcov_branch_coverage=1 00:03:41.328 --rc lcov_function_coverage=1 00:03:41.328 --rc genhtml_branch_coverage=1 00:03:41.328 --rc genhtml_function_coverage=1 00:03:41.328 --rc genhtml_legend=1 00:03:41.328 --rc geninfo_all_blocks=1 00:03:41.328 ' 00:03:41.328 12:10:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:41.328 --rc lcov_branch_coverage=1 00:03:41.328 --rc lcov_function_coverage=1 00:03:41.328 --rc genhtml_branch_coverage=1 00:03:41.328 --rc genhtml_function_coverage=1 00:03:41.328 --rc genhtml_legend=1 00:03:41.328 --rc geninfo_all_blocks=1 00:03:41.328 ' 00:03:41.328 12:10:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:41.328 --rc lcov_branch_coverage=1 00:03:41.328 --rc lcov_function_coverage=1 00:03:41.328 --rc genhtml_branch_coverage=1 00:03:41.328 --rc genhtml_function_coverage=1 00:03:41.328 --rc genhtml_legend=1 00:03:41.328 --rc geninfo_all_blocks=1 00:03:41.328 --no-external' 00:03:41.328 12:10:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:41.328 --rc lcov_branch_coverage=1 00:03:41.328 --rc lcov_function_coverage=1 00:03:41.328 --rc genhtml_branch_coverage=1 00:03:41.328 --rc genhtml_function_coverage=1 00:03:41.328 --rc genhtml_legend=1 00:03:41.328 --rc geninfo_all_blocks=1 00:03:41.328 --no-external' 00:03:41.328 12:10:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:41.328 lcov: LCOV version 1.14 00:03:41.328 12:10:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:59.409 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.409 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:11.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:11.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:11.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:11.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:11.647 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno
[the same pair of lines — "<file>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <file>.gcno" — repeats here for every remaining test/cpp_headers/*.gcno file, barrier.gcno through vfio_user_spec.gcno]
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:11.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:11.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:11.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:11.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:11.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:11.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:11.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:13.551 12:11:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:13.551 12:11:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.551 12:11:28 -- common/autotest_common.sh@10 -- # set +x 00:04:13.551 12:11:28 -- spdk/autotest.sh@91 -- # rm -f 00:04:13.551 12:11:28 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.486 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:14.486 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:14.486 12:11:29 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:14.486 12:11:29 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:14.486 12:11:29 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:14.486 12:11:29 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:14.486 12:11:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.486 12:11:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:14.486 12:11:29 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:14.486 12:11:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.486 12:11:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.486 12:11:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.486 12:11:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:14.486 12:11:29 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:14.486 12:11:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:14.486 12:11:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.486 12:11:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.486 12:11:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:14.486 12:11:29 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:14.486 12:11:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:14.486 12:11:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.486 12:11:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.486 12:11:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:14.486 12:11:29 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:14.486 12:11:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:14.486 12:11:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.486 12:11:29 -- spdk/autotest.sh@98 -- # 
(( 0 > 0 )) 00:04:14.486 12:11:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.486 12:11:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:14.486 12:11:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:14.486 12:11:29 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:14.486 12:11:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:14.486 No valid GPT data, bailing 00:04:14.486 12:11:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.486 12:11:29 -- scripts/common.sh@391 -- # pt= 00:04:14.486 12:11:29 -- scripts/common.sh@392 -- # return 1 00:04:14.486 12:11:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:14.486 1+0 records in 00:04:14.486 1+0 records out 00:04:14.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446693 s, 235 MB/s 00:04:14.486 12:11:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.486 12:11:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:14.486 12:11:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:14.486 12:11:29 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:14.486 12:11:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:14.486 No valid GPT data, bailing 00:04:14.486 12:11:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:14.487 12:11:29 -- scripts/common.sh@391 -- # pt= 00:04:14.487 12:11:29 -- scripts/common.sh@392 -- # return 1 00:04:14.487 12:11:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:14.487 1+0 records in 00:04:14.487 1+0 records out 00:04:14.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041065 s, 255 MB/s 00:04:14.487 12:11:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.487 12:11:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:14.487 12:11:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:14.487 12:11:29 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:14.487 12:11:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:14.487 No valid GPT data, bailing 00:04:14.745 12:11:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:14.745 12:11:29 -- scripts/common.sh@391 -- # pt= 00:04:14.745 12:11:29 -- scripts/common.sh@392 -- # return 1 00:04:14.745 12:11:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:14.745 1+0 records in 00:04:14.745 1+0 records out 00:04:14.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512255 s, 205 MB/s 00:04:14.745 12:11:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.745 12:11:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:14.745 12:11:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:14.745 12:11:29 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:14.745 12:11:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:14.745 No valid GPT data, bailing 00:04:14.745 12:11:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:14.745 12:11:29 -- scripts/common.sh@391 -- # pt= 00:04:14.745 12:11:29 -- scripts/common.sh@392 -- # return 1 00:04:14.745 12:11:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:14.745 1+0 records in 00:04:14.745 1+0 records out 00:04:14.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.00447673 s, 234 MB/s 00:04:14.745 12:11:29 -- spdk/autotest.sh@118 -- # sync 00:04:14.745 12:11:29 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:14.745 12:11:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:14.745 12:11:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:16.645 12:11:31 -- spdk/autotest.sh@124 -- # uname -s 00:04:16.645 12:11:31 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:16.645 12:11:31 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:16.645 12:11:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.645 12:11:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.645 12:11:31 -- common/autotest_common.sh@10 -- # set +x 00:04:16.645 ************************************ 00:04:16.645 START TEST setup.sh 00:04:16.645 ************************************ 00:04:16.645 12:11:31 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:16.645 * Looking for test storage... 00:04:16.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.645 12:11:31 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:16.645 12:11:31 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:16.645 12:11:31 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:16.645 12:11:31 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.645 12:11:31 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.645 12:11:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:16.645 ************************************ 00:04:16.645 START TEST acl 00:04:16.645 ************************************ 00:04:16.645 12:11:31 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:16.645 * Looking for test storage... 
00:04:16.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.646 12:11:31 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:16.646 12:11:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.646 12:11:31 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:16.646 12:11:31 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:16.646 12:11:31 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:16.646 12:11:31 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:16.646 12:11:31 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:16.646 12:11:31 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.646 12:11:31 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.592 12:11:32 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:17.592 12:11:32 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:17.592 12:11:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.592 12:11:32 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:17.592 12:11:32 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.592 12:11:32 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:18.160 12:11:32 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.160 Hugepages 00:04:18.160 node hugesize free / total 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.160 00:04:18.160 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:18.160 12:11:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:18.160 12:11:32 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.160 12:11:32 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.160 12:11:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:18.160 ************************************ 00:04:18.160 START TEST denied 00:04:18.160 ************************************ 00:04:18.160 12:11:33 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:18.160 12:11:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:18.160 12:11:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:18.160 12:11:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:18.160 12:11:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.160 12:11:33 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.093 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.093 12:11:33 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.658 00:04:19.658 real 0m1.389s 00:04:19.658 user 0m0.555s 00:04:19.658 sys 0m0.791s 00:04:19.658 12:11:34 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.658 12:11:34 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:19.658 ************************************ 00:04:19.658 END TEST denied 00:04:19.658 ************************************ 00:04:19.658 12:11:34 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:19.658 12:11:34 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.658 12:11:34 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.658 12:11:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:19.658 ************************************ 00:04:19.658 START TEST allowed 00:04:19.658 ************************************ 00:04:19.658 12:11:34 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:19.658 12:11:34 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:19.658 12:11:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:19.658 12:11:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:19.658 12:11:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.658 12:11:34 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.610 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.610 12:11:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.177 00:04:21.177 real 0m1.485s 00:04:21.177 user 0m0.635s 00:04:21.177 sys 0m0.836s 00:04:21.177 12:11:35 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.177 ************************************ 00:04:21.177 12:11:35 
setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:21.177 END TEST allowed 00:04:21.177 ************************************ 00:04:21.177 00:04:21.177 real 0m4.575s 00:04:21.177 user 0m1.974s 00:04:21.177 sys 0m2.555s 00:04:21.177 12:11:35 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.177 12:11:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:21.177 ************************************ 00:04:21.177 END TEST acl 00:04:21.177 ************************************ 00:04:21.177 12:11:35 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:21.177 12:11:35 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.177 12:11:35 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.177 12:11:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.177 ************************************ 00:04:21.177 START TEST hugepages 00:04:21.177 ************************************ 00:04:21.177 12:11:36 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:21.438 * Looking for test storage... 00:04:21.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5879080 kB' 'MemAvailable: 7392456 kB' 'Buffers: 2436 kB' 'Cached: 1724756 kB' 'SwapCached: 0 kB' 'Active: 477416 kB' 'Inactive: 1354504 kB' 'Active(anon): 115216 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106336 kB' 'Mapped: 48612 kB' 'Shmem: 10488 kB' 'KReclaimable: 67220 kB' 'Slab: 141768 kB' 'SReclaimable: 67220 kB' 'SUnreclaim: 74548 kB' 'KernelStack: 6524 kB' 'PageTables: 4304 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 336624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.438 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.438 
12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[the same xtrace block — setup/common.sh@31 IFS=': ', read -r var val _, setup/common.sh@32 test of the field name against Hugepagesize, continue — repeats here for every remaining /proc/meminfo field, Inactive(anon) through HugePages_Total]
12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.439 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.440 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.440 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.440 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
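The wall of continue lines condensed above is the entire mechanism behind these get_meminfo calls: setup/common.sh reads /proc/meminfo (or a per-node meminfo, stripping the "Node N " prefix as the mem=("${mem[@]#Node +([0-9]) }") step shows) with IFS=': ' and compares each field name until it reaches the one requested; xtrace merely prints the literal match as a backslash-escaped pattern. A minimal sketch of the technique, reconstructed from the trace rather than copied from setup/common.sh:

    # Print the value of one /proc/meminfo field (sketch of the scan traced above).
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # xtrace renders this as e.g. [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done </proc/meminfo
        return 1
    }

    get_meminfo Hugepagesize   # prints 2048 on this VM, hence default_hugepages=2048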
00:04:21.440 12:11:36 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:21.440 12:11:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:21.440 12:11:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:21.440 12:11:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:21.440 ************************************
00:04:21.440 START TEST default_setup
00:04:21.440 ************************************
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:21.440 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:22.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:22.008 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.272 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
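get_test_nr_hugepages 2097152 0 lands on nr_hugepages=1024 by straight division: the requested pool size over the default hugepage size found above. The units are not spelled out in the trace, but the numbers are consistent with kB (2 GiB / 2 MiB = 1024 pages), and the two echo 0 iterations of clear_hp match a node exposing two page-size pools. A sketch of the arithmetic and the reset, with sysfs paths taken from the trace (root required for the writes):

    default_hugepages=2048                        # kB, from get_meminfo Hugepagesize
    size=2097152                                  # kB requested by the test (2 GiB)
    nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 pages on node 0

    # clear_hp-style reset: zero every per-node pool before the test sizes its own.
    for hp in /sys/devices/system/node/node0/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"   # the traced loop ran twice, i.e. two page-size pools on this VM
    done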
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.272 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.273 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.273 12:11:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976892 kB' 'MemAvailable: 9490128 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 494052 kB' 'Inactive: 1354508 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354508 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 24 kB' 'AnonPages: 123012 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 66936 kB' 'Slab: 141432 kB' 'SReclaimable: 66936 kB' 'SUnreclaim: 74496 kB' 'KernelStack: 6464 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 compare every field of that snapshot against AnonHugePages, continuing on each non-match until the AnonHugePages line]
00:04:22.274 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.274 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:22.274 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:22.274 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
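anon=0 comes out of the AnonHugePages lookup, but only because the guard on hugepages.sh@96 passed first: [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] is the expanded form of a pattern match against /sys/kernel/mm/transparent_hugepage/enabled, i.e. "count anonymous hugepages unless THP is switched off". A sketch of that guard, assuming the expansion seen in the trace and reusing the get_meminfo sketch above:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 (kB) in this run
    fi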
00:04:22.274 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:22.274 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
[xtrace condensed: same setup/common.sh@18-31 /proc/meminfo prologue as in the AnonHugePages lookup above]
00:04:22.274 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7977340 kB' 'MemAvailable: 9490576 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 493900 kB' 'Inactive: 1354508 kB' 'Active(anon): 131700 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354508 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 24 kB' 'AnonPages: 122868 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 66936 kB' 'Slab: 141432 kB' 'SReclaimable: 66936 kB' 'SUnreclaim: 74496 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: field-by-field comparison against HugePages_Surp, continuing until the HugePages_Surp line matches]
00:04:22.276 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.276 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:22.276 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:22.276 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:22.276 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.276 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[xtrace condensed: same /proc/meminfo prologue as above]
00:04:22.276 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976584 kB' 'MemAvailable: 9489820 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 493900 kB' 'Inactive: 1354508 kB' 'Active(anon): 131700 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354508 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122868 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 66936 kB' 'Slab: 141432 kB' 'SReclaimable: 66936 kB' 'SUnreclaim: 74496 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: field-by-field comparison against HugePages_Rsvd; the log excerpt ends mid-scan]
read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.277 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
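The repeated IFS=': ' / read -r var val _ / [[ ... ]] / continue quartet above is setup/common.sh's get_meminfo helper scanning its cached meminfo snapshot one field at a time until the requested key matches. A minimal sketch of that pattern, reconstructed from the xtrace (the verbatim setup/common.sh may differ in detail):

shopt -s extglob                      # for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-} var val _
	local mem_f=/proc/meminfo mem
	# With a node id the per-node file is preferred; with node= empty the
	# probe hits .../node/node/meminfo, fails, and /proc/meminfo is kept,
	# exactly as the @23 entry above shows.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
	local IFS=': '
	while read -r var val _; do       # one IFS/read/[[/continue round per field
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

With no node argument the lookup runs against /proc/meminfo, so get_meminfo HugePages_Rsvd prints 0 and the caller at hugepages.sh@100 stores it as resv=0 on the next line.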
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:22.278 nr_hugepages=1024
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:22.278 resv_hugepages=0
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:22.278 surplus_hugepages=0
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:22.278 anon_hugepages=0
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7977616 kB' 'MemAvailable: 9490852 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 494052 kB' 'Inactive: 1354508 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354508 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123024 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 66936 kB' 'Slab: 141428 kB' 'SReclaimable: 66936 kB' 'SUnreclaim: 74492 kB' 'KernelStack: 6448 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.278 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... repeated setup/common.sh@31/@32 IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue iterations over the remaining meminfo fields elided ...]
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
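The two @110 entries above come from a single hugepages.sh line: the get_meminfo HugePages_Total command substitution is traced first, then the arithmetic test is traced again with its result (1024) spliced in. Together with get_nodes at @27-@33, the bookkeeping is roughly the following sketch (a reconstruction; the value assigned into nodes_sys is already expanded to 1024 in the xtrace, so its source expression is not visible here and is assumed):

# The invariant behind hugepages.sh@107/@110: every configured page is
# requested, surplus, or reserved (here 1024 == 1024 + 0 + 0).
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

# get_nodes (hugepages.sh@27-@33): record each NUMA node's hugepage count;
# this VM exposes a single node, node0.
shopt -s extglob
declare -a nodes_sys nodes_test   # globals, cf. "local -g nodes_test" later in the trace
get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		nodes_sys[${node##*node}]=1024   # pre-expanded in the xtrace; source expression assumed
	done
	no_nodes=${#nodes_sys[@]}            # 1 on this runner
	(( no_nodes > 0 ))                   # at least one node must be visible
}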
00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7977376 kB' 'MemUsed: 4264600 kB' 'SwapCached: 0 kB' 'Active: 493704 kB' 'Inactive: 1354508 kB' 'Active(anon): 131504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354508 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1727184 kB' 'Mapped: 48620 kB' 'AnonPages: 122664 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66952 kB' 'Slab: 141444 kB' 'SReclaimable: 66952 kB' 'SUnreclaim: 74492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.280 12:11:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.280 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.281 12:11:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' read -r var val _ walks the remaining /proc/meminfo fields; Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free each fail the == HugePages_Surp test and hit continue]
00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
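The compare/continue lines condensed above are bash xtrace from common.sh's get_meminfo helper: it reads /proc/meminfo one "Key: value" line at a time and echoes the value of the single requested field. A minimal sketch of that technique, reconstructed from the trace rather than copied from common.sh (the _sketch suffix marks the name as mine):

#!/usr/bin/env bash
# Sketch: fetch one field from /proc/meminfo, as the traced loop does.
get_meminfo_sketch() {
    local get=$1 var val _
    # IFS=': ' splits "HugePages_Surp:   0" into var=HugePages_Surp, val=0.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # every other field hits 'continue'
        echo "${val:-0}"                    # a kB figure or a bare page count
        return 0
    done < /proc/meminfo
    return 1                                # field absent on this kernel
}

get_meminfo_sketch HugePages_Surp           # prints 0 on the run traced here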
00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.281 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.282 node0=1024 expecting 1024 00:04:22.282 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.282 12:11:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.282 00:04:22.282 real 0m0.954s 00:04:22.282 user 0m0.443s 00:04:22.282 sys 0m0.443s 00:04:22.282 12:11:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.282 12:11:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:22.282 ************************************ 00:04:22.282 END TEST default_setup 00:04:22.282 ************************************ 00:04:22.541 12:11:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:22.541 12:11:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.541 12:11:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.541 12:11:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.541 ************************************ 00:04:22.541 START TEST per_node_1G_alloc 00:04:22.541 ************************************ 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.541 
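The per_node_1G_alloc prologue above turns a requested pool size into a page count: get_test_nr_hugepages is called with size 1048576 (kB) and node id 0, and with the 2048 kB Hugepagesize reported in the meminfo dumps below that yields nr_hugepages=512, all pinned to node 0. A hedged arithmetic sketch (not the literal hugepages.sh code):

# Sketch of the size -> page-count conversion traced above.
get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift                 # 1048576 kB = 1 GiB pool
    local hugepage_kb=2048                  # 'Hugepagesize: 2048 kB'
    local nr=$((size_kb / hugepage_kb))     # 1048576 / 2048 = 512 pages
    local node
    for node in "$@"; do                    # remaining args are node ids
        echo "node${node}=${nr}"
    done
}

get_test_nr_hugepages_sketch 1048576 0      # -> node0=512

So despite the test's name, the 1G pool is realized as 512 default-size (2 MiB) pages on node 0, which is why the dumps below report 'HugePages_Total: 512' alongside 'Hugetlb: 1048576 kB'.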
12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.541 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.945 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.945 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.945 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:22.945 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.946 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9024584 kB' 'MemAvailable: 10537840 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 494464 kB' 'Inactive: 1354520 kB' 
'Active(anon): 132264 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123408 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141460 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74512 kB' 'KernelStack: 6520 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:22.946
12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every field from MemTotal through HardwareCorrupted fails the == AnonHugePages test and hits continue]
00:04:22.947 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.947 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.947 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.947 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:22.947 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:22.947 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get=HugePages_Surp, node='', mem_f=/proc/meminfo, mapfile -t mem, "Node <n> " prefix strip, IFS=': ' read -r var val _]
00:04:22.947 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9024584 kB' 'MemAvailable: 10537840 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 494612 kB' 'Inactive: 1354520 kB' 'Active(anon): 132412 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123296 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141460 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74512 kB' 'KernelStack: 6504 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
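The get_meminfo preamble condensed above (local node=, the /sys/devices/system/node/.../meminfo existence test, mapfile, and the "${mem[@]#Node +([0-9]) }" expansion) is the helper's NUMA-aware source selection: with an empty node argument it reads the system-wide /proc/meminfo; given a node id it would read that node's meminfo file and strip the "Node <n> " prefix every line carries there. A sketch under those assumptions (names are mine, not common.sh's):

#!/usr/bin/env bash
shopt -s extglob                     # required for the +([0-9]) pattern below
# Sketch: pick the system-wide or per-node meminfo and normalize its lines.
read_meminfo_sketch() {
    local node=$1 mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node 0 "; strip it so both
    # sources parse identically downstream.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

read_meminfo_sketch ""               # system-wide view, as in this trace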
00:04:22.948 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every field from MemTotal through HugePages_Rsvd fails the == HugePages_Surp test and hits continue]
00:04:22.949 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.949 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.949 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.949 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:22.949 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.949 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get=HugePages_Rsvd, node='', mem_f=/proc/meminfo, mapfile -t mem, "Node <n> " prefix strip, IFS=': ' read -r var val _]
00:04:22.949 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9024584 kB' 'MemAvailable: 10537840 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 494136 kB' 'Inactive: 1354520 kB' 'Active(anon): 131936 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123044 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141456 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74508 kB' 'KernelStack: 6432 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
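At this point verify_nr_hugepages has collected anon=0 (AnonHugePages) and surp=0 (HugePages_Surp) and is fetching HugePages_Rsvd; the locals declared at hugepages.sh@89-94 (surp, resv, anon and the sorted arrays) show the shape of the check. A hedged sketch of that bookkeeping, reusing get_meminfo_sketch from earlier in this log; the exact assertions hugepages.sh makes may differ:

# Sketch: the counters verify_nr_hugepages reads before comparing the pool
# against what the test configured (NRHUGE=512 on node 0 in this run).
verify_nr_hugepages_sketch() {
    local anon surp resv total
    anon=$(get_meminfo_sketch AnonHugePages)    # THP usage; 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)   # overcommitted pages; 0 here
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # reserved, not yet faulted
    total=$(get_meminfo_sketch HugePages_Total)
    echo "total=$total surp=$surp resv=$resv anon=$anon"
    (( total == 512 && surp == 0 ))             # pool fully, cleanly present
}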
setup/common.sh@31 -- # read -r var val _ 00:04:22.949
[xtrace condensed: the get_meminfo scan stepped through every /proc/meminfo field from MemTotal onward, logging "setup/common.sh@32 -- # continue" for each key that was not HugePages_Rsvd]
00:04:22.951 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.951 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.951 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.951 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.951
nr_hugepages=512
12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:22.951
resv_hugepages=0
12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.951
surplus_hugepages=0
12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.951
anon_hugepages=0
12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.951 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:22.951 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:22.951 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.951
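(Editor's note: a minimal reconstruction of the get_meminfo helper whose xtrace dominates this log, rebuilt from the traced records at setup/common.sh@17-33. Treat it as a sketch of the observed behavior, not the literal SPDK source; the exact function body is inferred.)

  shopt -s extglob                        # the +([0-9]) pattern below needs extglob
  get_meminfo() {                         # usage: get_meminfo <field> [node]
      local get=$1
      local node=${2:-}                   # empty -> system-wide /proc/meminfo
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue    # the long runs of "continue" records above
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

(On this runner, get_meminfo HugePages_Total yields 512 and get_meminfo HugePages_Surp 0 yields 0 for node0, which is exactly what the echo/return records in the trace show.)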
12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31
[xtrace condensed: get_meminfo re-read /proc/meminfo (node is unset, so /sys/devices/system/node/node/meminfo does not exist) and dumped the same fields as above, unchanged except 'Active: 493932 kB' 'Active(anon): 131732 kB' 'AnonPages: 122840 kB' 'KernelStack: 6448 kB' 'PageTables: 4156 kB'; the per-key scan then skipped every field that was not HugePages_Total]
00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.953 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.954 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9024584 kB' 'MemUsed: 3217392 kB' 'SwapCached: 0 kB' 'Active: 494152 kB' 'Inactive: 1354520 kB' 'Active(anon): 131952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1727184 kB' 'Mapped: 48620 kB' 'AnonPages: 123060 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66948 kB' 'Slab: 141456 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.954
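(Editor's note: the hugepages.sh@27-33 and @115-117 records above trace the per-node bookkeeping. Below is a minimal sketch of that logic reconstructed from the xtrace; the array and variable names come from the trace, while the surrounding control flow and the verify_nodes wrapper name are assumptions of this sketch.)

  shopt -s extglob                  # for the +([0-9]) glob
  # nodes_sys[] : pages the kernel reports per node; nodes_test[] : pages the
  # test expects per node (filled earlier from the requested allocation).
  declare -a nodes_sys nodes_test
  resv=0

  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # the xtrace only shows the expanded assignment (=512); reading the
          # value back through get_meminfo is an assumption of this sketch
          nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))            # fail unless at least one NUMA node is visible
  }

  verify_nodes() {                  # hypothetical name for the @115-130 sequence
      local node
      for node in "${!nodes_test[@]}"; do
          (( nodes_test[node] += resv ))                                  # hugepages.sh@116
          (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") )) # hugepages.sh@117
          echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
          [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]
      done
  }

(With a single node and 512 pages requested, this is what produces the "node0=512 expecting 512" line that closes the test below.)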
12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.954
[xtrace condensed: the node0 scan compared each field of /sys/devices/system/node/node0/meminfo against HugePages_Surp, logging "setup/common.sh@32 -- # continue" for every non-matching key down to the last few fields]
00:04:22.955 12:11:37
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.955 node0=512 expecting 512 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:22.955 00:04:22.955 real 0m0.518s 00:04:22.955 user 0m0.247s 00:04:22.955 sys 0m0.304s 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.955 12:11:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.955 ************************************ 00:04:22.955 END TEST per_node_1G_alloc 00:04:22.955 ************************************ 00:04:22.955 12:11:37 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:22.955 12:11:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.955 12:11:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.955 12:11:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.955 ************************************ 00:04:22.955 START TEST even_2G_alloc 00:04:22.955 ************************************ 00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:22.955 
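The `node0=512 expecting 512` line above is the per-node verification step: hugepages.sh compares the page count it expects on each NUMA node against what the system reports, and the odd-looking `[[ 512 == \5\1\2 ]]` is just how bash xtrace quotes the right-hand side of a pattern match. A minimal standalone sketch of that idiom, with the node counts hard-coded rather than read back from /sys as the real script does:

    #!/usr/bin/env bash
    # Sketch of the per-node verification idiom from the trace above.
    # nodes_test holds the expected pages per node; nodes_sys stands in
    # for the values the real script reads from /sys (hard-coded here).
    nodes_test=(512)
    nodes_sys=(512)
    declare -a sorted_t sorted_s

    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # the counts become array indices,
        sorted_s[nodes_sys[node]]=1    # so equal count sets -> equal index sets
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

    # xtrace renders a comparison like this as [[ 512 == \5\1\2 ]] in the log
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo verification OK

Keying sorted_t/sorted_s by the counts themselves deduplicates them, so the final comparison succeeds exactly when the sets of expected and observed counts coincide.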
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.955 12:11:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:23.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:23.214 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:23.215 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
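get_test_nr_hugepages above turns the requested 2097152 kB (2 GiB) into nr_hugepages=1024, which only works out if the size and the default hugepage size are both in kB: 2097152 / 2048 = 1024, consistent with the `Hugepagesize: 2048 kB` and `Hugetlb: 2097152 kB` lines in the meminfo snapshots below. A sketch of that arithmetic under that kB assumption (not the SPDK helper itself):

    #!/usr/bin/env bash
    # Sketch: how 2097152 kB becomes nr_hugepages=1024 in the trace above.
    size=2097152             # requested pool, in kB (2 GiB)
    default_hugepages=2048   # assumed: default hugepage size in kB ('Hugepagesize: 2048 kB')

    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$((size / default_hugepages))
    echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1024

    # Single NUMA node (_no_nodes=1): the whole batch goes to node 0,
    # matching nodes_test[_no_nodes - 1]=1024 in the trace.
    _no_nodes=1
    nodes_test[_no_nodes - 1]=$nr_hugepages
    echo "node0 gets ${nodes_test[0]} pages"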
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.480 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7973780 kB' 'MemAvailable: 9487036 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 494276 kB' 'Inactive: 1354520 kB' 'Active(anon): 132076 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123492 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141456 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74508 kB' 'KernelStack: 6568 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: the setup/common.sh@31-@32 cycle of read -r var val _ / [[ <key> == AnonHugePages ]] / continue repeats for every key in the snapshot above, in order from MemTotal through HardwareCorrupted]
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
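Every get_meminfo call in this trace follows the same shape: read /proc/meminfo (or a per-node copy under /sys) into an array, strip any `Node <n>` prefix, then walk the keys with `IFS=': ' read -r var val _` until the requested one matches and its value is echoed, 0 for AnonHugePages here. A self-contained sketch of that scan, assuming the same file layout; it is not the SPDK common.sh helper verbatim:

    #!/usr/bin/env bash
    # Sketch of the /proc/meminfo scan the xtrace above spells out key by key.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val mem_f mem
        mem_f=/proc/meminfo
        # When a node is given and a per-node meminfo exists, read that instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix lines with "Node <n> "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of [[ key == ... ]] in the log
            echo "$val"                        # e.g. 0 for AnonHugePages above
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on the VM traced above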
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.481 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7973780 kB' 'MemAvailable: 9487036 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 494172 kB' 'Inactive: 1354520 kB' 'Active(anon): 131972 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123120 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141452 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74504 kB' 'KernelStack: 6472 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: the setup/common.sh@31-@32 cycle of read -r var val _ / [[ <key> == HugePages_Surp ]] / continue repeats for every key in the snapshot above, in order from MemTotal through HugePages_Rsvd]
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
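At this point verify_nr_hugepages has anon=0 and surp=0, and the scan that follows fetches HugePages_Rsvd the same way; the snapshots report HugePages_Total: 1024 and HugePages_Free: 1024. A rough sketch of the bookkeeping this implies; the final comparison is an assumption about how hugepages.sh combines the counters, not a quote of the script:

    #!/usr/bin/env bash
    # Sketch of the counters verify_nr_hugepages has gathered at this point.
    # Values taken from this run's snapshots; the check below is assumed.
    anon=0       # AnonHugePages  -- transparent hugepages in use
    surp=0       # HugePages_Surp -- pages allocated beyond the configured pool
    resv=0       # HugePages_Rsvd -- pages reserved but not yet faulted in
    total=1024   # HugePages_Total
    free=1024    # HugePages_Free
    expected=1024   # NRHUGE requested by the test

    if (( surp == 0 && resv == 0 && total == expected )); then
        echo "nr_hugepages=$total verified (free=$free, anon=${anon}kB)"
    fi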
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.483 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7973780 kB' 'MemAvailable: 9487036 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 493976 kB' 'Inactive: 1354520 kB' 'Active(anon): 131776 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123180 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141448 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74500 kB' 'KernelStack: 6472 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: the setup/common.sh@31-@32 cycle of read -r var val _ / [[ <key> == HugePages_Rsvd ]] / continue repeats for every key in the snapshot above, in order from MemTotal through CommitLimit]
00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:23.485 12:11:38
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:23.485 nr_hugepages=1024 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.485 resv_hugepages=0 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.485 
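
The lookup that just returned 0 is the whole of setup/common.sh's get_meminfo technique: split each line of /proc/meminfo (or a per-node meminfo file) on ': ', read key/value pairs until the requested key matches, then echo the value without its unit. A minimal standalone sketch of that technique follows; it is illustrative, not the literal helper (the real script strips the per-node "Node <n> " prefix with mapfile plus an extglob expansion, where this sketch substitutes sed):

# Sketch of the meminfo lookup traced above (assumes Linux, bash).
# get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
# /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # First field is the key, second the value; a trailing "kB" lands in $_.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # drop the per-node line prefix
    return 1
}

get_meminfo HugePages_Rsvd    # prints 0 in the run above
get_meminfo HugePages_Surp 0  # per-node form, used later in this test
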
surplus_hugepages=0 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.485 anon_hugepages=0 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.485 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.486 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7973780 kB' 'MemAvailable: 9487036 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 493888 kB' 'Inactive: 1354520 kB' 'Active(anon): 131688 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123056 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141436 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74488 kB' 'KernelStack: 6432 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:23.486 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.486 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.486 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.486 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.486 12:11:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.486 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[the same read loop walks every key from MemAvailable through Unaccepted, one "continue" per non-matching key, until HugePages_Total is reached]
00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@20 -- # local mem_f mem 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7973780 kB' 'MemUsed: 4268196 kB' 'SwapCached: 0 kB' 'Active: 494088 kB' 'Inactive: 1354520 kB' 'Active(anon): 131888 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1727184 kB' 'Mapped: 48620 kB' 'AnonPages: 123036 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66948 kB' 'Slab: 141436 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.487 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.488 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[the node0 meminfo scan continues identically over every remaining key, Inactive through HugePages_Free, until HugePages_Surp is reached]
00:04:23.489 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.489 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.489 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:23.489 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.489 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.489 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.489 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.489 node0=1024 expecting 1024 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.489 12:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.489 00:04:23.489 real 0m0.512s 00:04:23.489 user 0m0.245s 00:04:23.489 sys 0m0.299s 00:04:23.489
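
Pulling the pass condition for even_2G_alloc out of the trace: nr_hugepages was requested as 1024, resv came from HugePages_Rsvd (0), surp from HugePages_Surp on node0 (0), and the test passes when the pool equals nr_hugepages + surp + resv and each node's share matches, hence the 'node0=1024 expecting 1024' output. A compact restatement of that accounting, reusing the get_meminfo sketch above (the values in comments are the ones this run observed):

# Recheck the hugepage accounting the test just verified.
nr_hugepages=1024                       # requested 2G pool of 2048 kB pages
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
surp=$(get_meminfo HugePages_Surp 0)    # node0 surplus, 0 in this run
total=$(get_meminfo HugePages_Total)    # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo 'pool size mismatch'
node0=$(get_meminfo HugePages_Total 0)  # single-node VM: all pages on node0
echo "node0=$node0 expecting $nr_hugepages"

12:11:38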
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.489 12:11:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.489 ************************************ 00:04:23.489 END TEST even_2G_alloc 00:04:23.489 ************************************ 00:04:23.489 12:11:38 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:23.489 12:11:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.489 12:11:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.489 12:11:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.489 ************************************ 00:04:23.489 START TEST odd_alloc 00:04:23.489 ************************************ 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.489 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.011 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 
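
The odd_alloc sizing just traced is plain arithmetic: HUGEMEM=2049 (MB) gives size = 2049 * 1024 = 2098176 kB, and with the default 2048 kB hugepage that yields the deliberately odd nr_hugepages=1025. A sketch that reproduces the observed numbers (ceiling division is an assumption here; only the inputs and the 1025 result come from the trace):

# Reproduce odd_alloc's pool sizing from HUGEMEM (megabytes).
HUGEMEM=2049
default_hugepages=2048                  # kB, Hugepagesize in /proc/meminfo
size=$(( HUGEMEM * 1024 ))              # 2098176 kB
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"       # 1025
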
00:04:24.011 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.011 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7968220 kB' 'MemAvailable: 9481480 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 494012 kB' 'Inactive: 1354524 kB' 'Active(anon): 131812 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123180 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141444 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74496 kB' 'KernelStack: 6472 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.012 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.012 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 
12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7968220 kB' 'MemAvailable: 9481480 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 493988 kB' 'Inactive: 1354524 kB' 'Active(anon): 131788 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123156 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141440 kB' 
'SReclaimable: 66948 kB' 'SUnreclaim: 74492 kB' 'KernelStack: 6440 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.013 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.014 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 
-- # echo 0 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7967720 kB' 'MemAvailable: 9480980 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 494016 kB' 'Inactive: 1354524 kB' 'Active(anon): 131816 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123188 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141440 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74492 kB' 'KernelStack: 6424 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.015 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.016 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 
12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.017 nr_hugepages=1025 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:24.017 resv_hugepages=0 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.017 surplus_hugepages=0 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.017 anon_hugepages=0 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
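The trace above has just finished setting up get_meminfo, the setup/common.sh helper the hugepages suite uses for every counter it checks: pick /proc/meminfo (or a per-node file under /sys/devices/system/node when a node argument is given), mapfile it, strip any "Node <n> " prefix, then split each line on IFS=': ' and walk the keys one record at a time until the requested field matches. A minimal, self-contained sketch of that pattern (simplified here; not the verbatim SPDK helper):

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern traced in this section
# (simplified; not the verbatim setup/common.sh helper).
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs and carry a "Node <n> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # drop the per-node prefix
    while IFS=': ' read -r var val _; do     # split "Key:   value kB"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")      # the key-by-key scan traced here
    return 1
}

get_meminfo HugePages_Rsvd      # -> 0 in the run above
get_meminfo HugePages_Surp 0    # same field, but for NUMA node 0
```

The long runs of "[[ key == pattern ]]" / "continue" records in the trace are exactly this loop executing: the HugePages_Rsvd lookup above ended in "echo 0", and the lookup traced below does the same walk for HugePages_Total.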
00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7967720 kB' 'MemAvailable: 9480980 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 494204 kB' 'Inactive: 1354524 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123116 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141436 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74488 kB' 'KernelStack: 6408 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
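The snapshot just printed is the raw material the scan below walks through, and its hugepage fields are internally consistent: 1025 pages of 2048 kB each account for the Hugetlb figure. A quick shell check with the numbers from this run's dump:

```bash
# Hugetlb should equal HugePages_Total x Hugepagesize (values taken
# from the snapshot above: 1025 pages of 2048 kB each).
echo "$(( 1025 * 2048 )) kB"   # -> 2099200 kB, matching 'Hugetlb: 2099200 kB'
```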
00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.017 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 
12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.018 12:11:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7967720 kB' 'MemUsed: 4274256 kB' 'SwapCached: 0 kB' 'Active: 494180 kB' 'Inactive: 1354524 kB' 'Active(anon): 131980 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1727188 kB' 'Mapped: 48744 kB' 'AnonPages: 123092 kB' 'Shmem: 10464 kB' 'KernelStack: 6460 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66948 kB' 'Slab: 141436 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.019 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.020 12:11:38 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:24.020 node0=1025 expecting 1025
00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:24.020 
00:04:24.020 real 0m0.518s
00:04:24.020 user 0m0.251s
00:04:24.020 sys 0m0.300s
00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:24.020 12:11:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:24.020 ************************************
00:04:24.020 END TEST odd_alloc
00:04:24.020 ************************************
00:04:24.020 12:11:38 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:24.020 12:11:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:24.020 12:11:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:24.020 12:11:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:24.020 ************************************
00:04:24.020 START TEST custom_alloc
00:04:24.020 ************************************
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:24.020 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
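odd_alloc has passed (node0=1025 expecting 1025, 0.518 s wall time), and custom_alloc starts by turning a kilobyte budget into a hugepage count: get_test_nr_hugepages 1048576 divides the 1048576 kB (1 GiB) request by the 2048 kB default hugepage size to get nr_hugepages=512, and get_test_nr_hugepages_per_node then parks all 512 pages on the only node this VM has. A minimal sketch of that sizing step, with the constants taken from this run rather than being hard facts about every target:

```bash
# Minimal sketch of the sizing traced above (simplified; the 2048 kB
# hugepage size and the single NUMA node are facts of this run, not
# of every target).
default_hugepages=2048                  # kB, the default hugepage size
size=1048576                            # kB requested: 1 GiB of hugepages
if (( size >= default_hugepages )); then
    nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512
fi

_no_nodes=1                             # nodes detected on this VM
declare -a nodes_test
(( _no_nodes > 0 )) && nodes_test[_no_nodes - 1]=$nr_hugepages
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"    # -> 512 and 512
```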
00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.021 12:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.593 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.593 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.593 12:11:39 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9023600 kB' 'MemAvailable: 10536860 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 494528 kB' 'Inactive: 1354524 kB' 'Active(anon): 132328 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123400 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141448 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74500 kB' 'KernelStack: 6452 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:04:24.593 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] [identical compare/continue xtrace repeated for every key of the snapshot above until AnonHugePages is reached]
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
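All four get_meminfo calls in this test exercise the same scan just traced: slurp /proc/meminfo (or a per-NUMA-node meminfo file) with mapfile, strip any "Node <n> " prefix, then split each "Key: value kB" line on ':' and spaces until the requested key turns up. A minimal standalone sketch of that pattern, assuming a plain /proc/meminfo (the function name is hypothetical, not the SPDK helper itself):

#!/usr/bin/env bash
# Sketch of the scan exercised by the trace above: print the value
# column for one /proc/meminfo key, stripping the "Node N " prefix
# that per-node meminfo files carry so both variants parse the same.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 var val _ line
    local -a mem
    mapfile -t mem < /proc/meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # no-op for the global file
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                # e.g. 0 for AnonHugePages here
            return 0
        fi
    done
    return 1                           # key not present
}

get_meminfo_sketch AnonHugePages       # prints 0 on this build VM

The per-key [[ ... ]] / continue lines in the trace are exactly this loop unrolled by xtrace, which is why every call emits one comparison per meminfo key.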
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9023600 kB' 'MemAvailable: 10536860 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 494020 kB' 'Inactive: 1354524 kB' 'Active(anon): 131820 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122952 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141464 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74516 kB' 'KernelStack: 6468 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:04:24.595 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [identical compare/continue xtrace repeated for every key of the snapshot above until HugePages_Surp is reached]
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
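The snapshots are also internally consistent: the kernel's Hugetlb figure is simply HugePages_Total multiplied by Hugepagesize. A quick check against the values logged above (plain arithmetic, not part of the test itself):

hugepages_total=512      # 'HugePages_Total: 512'
hugepagesize_kb=2048     # 'Hugepagesize: 2048 kB'
echo $(( hugepages_total * hugepagesize_kb ))   # 1048576 -> 'Hugetlb: 1048576 kB'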
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9023600 kB' 'MemAvailable: 10536860 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 493928 kB' 'Inactive: 1354524 kB' 'Active(anon): 131728 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122944 kB' 'Mapped: 48976 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141464 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74516 kB' 'KernelStack: 6500 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:04:24.597 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] [identical compare/continue xtrace repeated for every key of the snapshot above until HugePages_Rsvd is reached]
00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:24.599 nr_hugepages=512
12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:24.599 resv_hugepages=0
12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:24.599 surplus_hugepages=0
12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:24.599 anon_hugepages=0
12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
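With anon, surp, and resv all scanned out as 0, the assertions at hugepages.sh@107 and @109 reduce to straightforward bookkeeping: every one of the 512 pages the test configured must be a real, non-surplus, non-reserved page. A sketch of that check using the values from this run (variable names mirror the trace; the script body is illustrative, not the test itself):

nr_hugepages=512        # requested by the custom_alloc test
anon=0 surp=0 resv=0    # parsed from /proc/meminfo above
(( 512 == nr_hugepages + surp + resv )) || echo "surplus/reserved pages present" >&2
(( 512 == nr_hugepages )) || echo "allocation shortfall" >&2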
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9023600 kB' 'MemAvailable: 10536860 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 493984 kB' 'Inactive: 1354524 kB' 'Active(anon): 131784 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141460 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74512 kB' 'KernelStack: 6400 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.599 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.599 12:11:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... xtrace elided: the read loop scans /proc/meminfo fields (Buffers through CmaTotal); none match HugePages_Total ...] 00:04:24.601
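One quirk worth decoding in all of these scans: the trace renders every comparison target as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l. Inside [[ ]] the right-hand side of == is a glob pattern, so the harness evidently quotes it to force a literal match, and bash's xtrace re-prints a quoted pattern with each character backslash-escaped. A short demo of the same rendering (hypothetical snippet, not from the harness):

```bash
set -x
get=HugePages_Total
# xtrace shows: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[[ MemFree == "$get" ]] || true   # || true: a non-match is expected here
set +x
```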
12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9023600 kB' 'MemUsed: 3218376 kB' 'SwapCached: 0 kB' 'Active: 493816 kB' 'Inactive: 1354524 kB' 'Active(anon): 131616 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1727188 kB' 'Mapped: 48620 kB' 'AnonPages: 122808 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66948 kB' 'Slab: 141456 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.601 12:11:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue [... xtrace elided: the read loop scans the node0 meminfo fields (Inactive(anon) through FilePmdMapped); none match HugePages_Surp ...] 00:04:24.602
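The scan is about to hit HugePages_Surp (0 on node 0). hugepages.sh then folds reserved and surplus pages into the per-node expectation and compares it with what the kernel reports, which is where the node0=512 expecting 512 line below comes from. A sketch of that accounting under this run's single-node values (variable names follow the trace; the standalone script is illustrative):

```bash
#!/usr/bin/env bash
# Per-node accounting as traced in hugepages.sh@115-130; values from this run.
nodes_sys=([0]=512)    # HugePages count read back from node0's meminfo
nodes_test=([0]=512)   # what custom_alloc configured for node0
resv=0 surp=0          # HugePages_Rsvd / HugePages_Surp, both 0 here

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))   # cf. hugepages.sh@116
    (( nodes_test[node] += surp ))   # cf. hugepages.sh@117
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1   # @130's check
done
```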
setup/common.sh@32 -- # continue 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.602 node0=512 expecting 512 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:24.602 00:04:24.602 real 0m0.524s 00:04:24.602 user 0m0.265s 00:04:24.602 sys 0m0.295s 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.602 12:11:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.602 ************************************ 00:04:24.602 END TEST custom_alloc 00:04:24.602 ************************************ 00:04:24.602 12:11:39 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:24.602 12:11:39 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.602 12:11:39 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.602 12:11:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.602 ************************************ 00:04:24.602 START TEST no_shrink_alloc 00:04:24.602 ************************************ 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # 
no_shrink_alloc 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:24.602 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.603 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.175 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.175 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.175 12:11:39 
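no_shrink_alloc opens the same way custom_alloc did: get_test_nr_hugepages turns a requested size into a page count and pins it to node 0. From this run's numbers (2097152 against a 2048 kB Hugepagesize, landing at the Hugetlb: 2097152 kB snapshot just below), both quantities appear to be in kB; a sketch of the arithmetic under that assumption:

```bash
#!/usr/bin/env bash
# Size -> page-count arithmetic behind "nr_hugepages=1024" in the trace.
# Assumes size and Hugepagesize are both in kB, as this run's numbers imply.
size=2097152                 # 2 GiB requested for no_shrink_alloc
default_hugepages=2048       # Hugepagesize from /proc/meminfo
node_ids=('0')               # the test pins its pages to NUMA node 0

(( size >= default_hugepages )) || { echo "size below one hugepage" >&2; exit 1; }
nr_hugepages=$(( size / default_hugepages ))
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages   # cf. hugepages.sh@71 in the trace
done
echo "nr_hugepages=$nr_hugepages"    # -> 1024 on node 0
```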
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7974216 kB' 'MemAvailable: 9487476 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 494516 kB' 'Inactive: 1354524 kB' 'Active(anon): 132316 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123424 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141436 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74488 kB' 'KernelStack: 6488 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:25.175 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... xtrace elided: the read loop scans /proc/meminfo fields (Buffers through VmallocTotal); none match AnonHugePages ...] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7974468 kB' 'MemAvailable: 9487728 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 493896 kB' 'Inactive: 1354524 kB' 'Active(anon): 131696 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141436 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74488 kB' 'KernelStack: 6492 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
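The condensed xtrace above is, in essence, a key lookup over /proc/meminfo: the traced get_meminfo in setup/common.sh reads the file line by line with IFS=': ', compares each key against the requested one, and echoes the matching value. A minimal sketch of that pattern, with an illustrative name (the real helper also handles per-node files and is not reproduced here):

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup, assuming the usual
    # "Key:   value kB" layout of /proc/meminfo. Illustrative only.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Mirror the [[ $var == $get ]] / continue pattern in the trace:
            # skip every key until the requested one is found.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done </proc/meminfo
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp  ->  0 on this VM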
00:04:25.177 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [ ~55 similar xtrace lines: every key from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped via continue ]
00:04:25.179 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.179 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.179 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.179 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:25.179 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:25.179 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [ setup: local get=HugePages_Rsvd node= var val mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': ' ]
00:04:25.179 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7974724 kB' 'MemAvailable: 9487980 kB' 'Buffers: 2436 kB' 'Cached: 1724748 kB' 'SwapCached: 0 kB' 'Active: 494352 kB' 'Inactive: 1354520 kB' 'Active(anon): 132152 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123284 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141436 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74488 kB' 'KernelStack: 6444 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
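The setup steps traced above also explain the odd-looking path /sys/devices/system/node/node/meminfo: get_meminfo appears to have been called without a node argument, so the node number expands empty, the -e test fails, and the parser falls back to /proc/meminfo. When a node is given, per-node meminfo lines carry a "Node <n> " prefix, and the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips it so the same key/value parser works for both files. A sketch of that strip, assuming extglob and a machine with a node0:

    shopt -s extglob
    # Per-node lines look like "Node 0 MemTotal: 12241976 kB";
    # drop the "Node <n> " prefix from every array element so the
    # lines match the plain /proc/meminfo "Key: value" format.
    mapfile -t mem </sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"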
00:04:25.179 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [ ~55 similar xtrace lines: every key from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped via continue ]
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:25.181 nr_hugepages=1024
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:25.181 resv_hugepages=0
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:25.181 surplus_hugepages=0
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:25.181 anon_hugepages=0
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [ setup: local get=HugePages_Total node= var val mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': ' ]
00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7974724 kB' 'MemAvailable: 9487984 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 493932 kB' 'Inactive: 1354524 kB' 'Active(anon): 131732 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123176 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141432 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74484 kB' 'KernelStack: 6464 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
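With anon, surp, and resv read back, hugepages.sh asserts the accounting seen in the trace: the configured 1024 pages must equal nr_hugepages plus surplus plus reserved. A sketch of the same arithmetic under the values this run produced (variable names assumed to match the script's; the exit-on-failure behavior here is illustrative):

    # Values read back from /proc/meminfo in the trace above.
    nr_hugepages=1024 surp=0 resv=0
    # hugepages.sh@107/@109: both checks must hold for no_shrink_alloc to proceed.
    (( 1024 == nr_hugepages + surp + resv )) || exit 1
    (( 1024 == nr_hugepages )) || exit 1
    echo "hugepage accounting consistent"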
setup/common.sh@32 -- # continue 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.182 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
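Everything from "local get=HugePages_Total" down to the "echo 1024" below is bash xtrace of the get_meminfo helper in setup/common.sh: snapshot the meminfo source into an array, strip any "Node <N>" prefix, then read one "key: value" pair at a time and continue until the requested field matches. A self-contained sketch of that pattern (condensed for illustration from the traced lines; not the verbatim SPDK helper):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above; paths and field
    # names are as seen in the log, the control flow is simplified.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f=/proc/meminfo mem
        # A per-node query ("get_meminfo HugePages_Surp 0") reads the
        # node's own meminfo file instead of the global one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; strip it so both
        # sources parse identically (extglob enables the +([0-9]) pattern).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "HugePages_Total:     1024" into key and value.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total    # prints 1024 on this runner
    get_meminfo HugePages_Surp 0   # same lookup against node0's meminfo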
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.183 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7974724 kB' 'MemUsed: 4267252 kB' 'SwapCached: 0 kB' 'Active: 493948 kB' 'Inactive: 1354524 kB' 'Active(anon): 131748 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1727188 kB' 'Mapped: 48620 kB' 'AnonPages: 123172 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66948 kB' 'Slab: 141428 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
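A quick aside on the node0 snapshot just printed: unlike /proc/meminfo, the per-node file carries no MemAvailable line but does expose a derived MemUsed, which is simply MemTotal minus MemFree, so the snapshot is self-consistent:

    # Values taken from the node0 snapshot above:
    echo $(( 12241976 - 7974724 ))   # 4267252, matching 'MemUsed: 4267252 kB'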
[... identical read/compare/continue iterations over the node0 meminfo fields, MemTotal through HugePages_Free, elided ...]
00:04:25.184 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.184 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.184 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.184 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:25.184 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:25.184 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:25.184 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:25.184 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:25.185 node0=1024 expecting 1024
00:04:25.185 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:25.185 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:25.185 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:25.185 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:25.185 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:25.185 12:11:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:25.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:25.443 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:25.443 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:25.706 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:25.706 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:25.706 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:25.706 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:25.706 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:25.706 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:25.706 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
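For readers skimming the verification logic rather than the trace: hugepages.sh records the per-node allocation (get_nodes), folds reserved and surplus pages into the expected counts, and only then prints the "node0=1024 expecting 1024" comparison. A rough sketch of that flow (a hypothetical simplification built on the get_meminfo sketch earlier; the real bookkeeping in setup/hugepages.sh differs in detail):

    # Assumes the get_meminfo sketch above is already defined.
    shopt -s extglob
    declare -a nodes_sys nodes_test

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # One slot per NUMA node; the traced run records 1024 for node0.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        (( ${#nodes_sys[@]} > 0 ))   # the trace's "(( no_nodes > 0 ))" guard
    }

    verify_nr_hugepages() {
        local node resv surp nr_hugepages=1024
        resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
        surp=$(get_meminfo HugePages_Surp)   # 0 in this run
        # Global check: allocated == requested + surplus + reserved.
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
        # Per-node check: fold resv and the node's own surplus into the
        # expected count, then compare with the sysfs view.
        get_nodes
        for node in "${!nodes_sys[@]}"; do
            local expect=$(( nodes_sys[node] + resv + $(get_meminfo HugePages_Surp "$node") ))
            echo "node$node=${nodes_sys[node]} expecting $expect"   # node0=1024 expecting 1024
        done
    }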
00:04:25.706 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:25.706 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7977012 kB' 'MemAvailable: 9490272 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 494576 kB' 'Inactive: 1354524 kB' 'Active(anon): 132376 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123352 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141424 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74476 kB' 'KernelStack: 6628 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:25.707 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical read/compare/continue iterations for the fields MemFree through HardwareCorrupted elided ...]
00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
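The @96 test above gates anonymous-hugepage accounting on the runner's THP mode: "always [madvise] never" is not "[never]", so AnonHugePages is looked up and comes back 0. The same check as a standalone sketch (simplified from the traced hugepages.sh@96-97; reuses the get_meminfo sketch):

    # Only count AnonHugePages toward the expected total when transparent
    # hugepages are not pinned to [never].
    anon=0
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB on this runner
    fi
    echo "anon_hugepages=$anon"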
1724752 kB' 'SwapCached: 0 kB' 'Active: 494028 kB' 'Inactive: 1354524 kB' 'Active(anon): 131828 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122916 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141448 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74500 kB' 'KernelStack: 6480 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB' 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.708 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
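Everything get_meminfo does is visible in the prologue above: the requested key is stored in get, /proc/meminfo (or a per-node copy, whose lines carry a "Node N " prefix that common.sh@29 strips) is snapshotted into the mem array with mapfile, and the loop at common.sh@31-33 splits each "Key:   value kB" entry on IFS=': ' and echoes the value as soon as the key matches; the long printf at common.sh@16 is that snapshot being fed into mapfile, which is why the whole dump shows up in the trace. The backslash-heavy patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s are nothing exotic, just xtrace quoting each literal character of the unquoted right-hand side of [[ $var == $get ]]. A minimal sketch of the same lookup, reconstructed from the trace rather than copied from setup/common.sh (get_meminfo_sketch is a hypothetical name):

    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {    # usage: get_meminfo_sketch HugePages_Surp [node]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # With a node argument, read the node-local meminfo instead; its
        # lines carry a "Node N " prefix that is stripped off below.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop any "Node N " prefix
        for line in "${mem[@]}"; do
            # "HugePages_Surp:     0" -> var=HugePages_Surp, val=0
            IFS=': ' read -r var val _ <<< "$line"
            # Unquoted $get is why xtrace prints it backslash-escaped.
            if [[ $var == $get ]]; then
                echo "${val:-0}"
                return 0
            fi
        done
        return 1
    }

Run against the snapshot above, get_meminfo_sketch HugePages_Surp would print 0, which is exactly the echo 0 / surp=0 pair the next stretch of trace arrives at.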
[... xtrace scan elided: the same IFS=': ' / read -r var val _ / test / continue sequence repeats for every snapshot field from MemTotal onward until the requested key comes up ...]
00:04:25.710 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.710 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.710 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.710 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:25.710 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... get_meminfo prologue elided: the same common.sh@17-31 sequence as above, this time with local get=HugePages_Rsvd ...]
00:04:25.710 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976760 kB' 'MemAvailable: 9490020 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 494024 kB' 'Inactive: 1354524 kB' 'Active(anon): 131824 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122968 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141444 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74496 kB' 'KernelStack: 6464 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
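Between the two lookups nothing changed in the snapshot except fields the running system itself drifted (Active moved from 494028 kB to 494024 kB, AnonPages from 122916 kB to 122968 kB, and so on); the hugepage accounting is stable and internally consistent. Worked out from the dump above: Hugetlb = HugePages_Total x Hugepagesize = 1024 x 2048 kB = 2097152 kB, Slab = SReclaimable + SUnreclaim = 66948 kB + 74496 kB = 141444 kB, and Active = Active(anon) + Active(file) = 131824 kB + 362200 kB = 494024 kB. Those identities are a quick way to sanity-check any of these dumps by eye.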
[... xtrace scan elided: every snapshot field is tested against HugePages_Rsvd and skipped with continue until it matches ...]
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:25.712 nr_hugepages=1024
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:25.712 resv_hugepages=0
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:25.712 surplus_hugepages=0
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:25.712 anon_hugepages=0
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... get_meminfo prologue elided: the common.sh@17-31 sequence once more, with local get=HugePages_Total ...]
00:04:25.712 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976760 kB' 'MemAvailable: 9490020 kB' 'Buffers: 2436 kB' 'Cached: 1724752 kB' 'SwapCached: 0 kB' 'Active: 494024 kB' 'Inactive: 1354524 kB' 'Active(anon): 131824 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122968 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 66948 kB' 'Slab: 141444 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74496 kB' 'KernelStack: 6464 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 4020224 kB' 'DirectMap1G: 10485760 kB'
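With anon, surp, and resv all read back as 0 and nr_hugepages reported as 1024, the guard at setup/hugepages.sh@107 reduces to 1024 == 1024 + 0 + 0, so the script goes on to fetch HugePages_Total for the final comparison. A compressed sketch of that accounting step as the trace implies it (the variable names come from the trace; get_meminfo_sketch is the hypothetical helper sketched earlier, not the test's real function):

    anon=$(get_meminfo_sketch AnonHugePages)     # -> 0 kB of THP in this run
    surp=$(get_meminfo_sketch HugePages_Surp)    # -> 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # -> 0
    nr_hugepages=1024                            # the pool size this run reports
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The pool only counts as intact if no surplus or reserved pages crept in:
    (( 1024 == nr_hugepages + surp + resv ))
    (( 1024 == nr_hugepages ))

The point of the check is that a 1024-page pool only passes when none of those pages are surplus or reserved; the HugePages_Total lookup still in flight below should come back as 1024.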
[... xtrace scan elided: the final get_meminfo walks the snapshot again, testing each field against HugePages_Total; this chunk ends with that scan still in progress ...]
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc
-- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7976760 kB' 'MemUsed: 4265216 kB' 'SwapCached: 0 kB' 'Active: 493988 kB' 'Inactive: 1354524 kB' 'Active(anon): 131788 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1354524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1727188 kB' 'Mapped: 48624 kB' 'AnonPages: 123184 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66948 kB' 'Slab: 141460 kB' 'SReclaimable: 66948 kB' 'SUnreclaim: 74512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
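[editor's note] Everything from the IFS=': ' assignment down to the echo above is one helper, get_meminfo in setup/common.sh, walking a meminfo file key by key until the requested field turns up. A standalone sketch of that lookup; get_meminfo_field is an illustrative name, not the script's own function:

  #!/usr/bin/env bash
  # Sketch only: same parsing idea as the trace above, reduced to essentials.
  get_meminfo_field() {
      local want=$1 node=$2
      local file=/proc/meminfo var val _
      # Per-node counters live in sysfs and carry a "Node <n> " prefix,
      # which has to be stripped before the key can compare equal.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # the long "continue" runs above
          echo "$val"
          return 0
      done < <(sed 's/^Node [0-9]* //' "$file")
      return 1
  }

  get_meminfo_field HugePages_Total      # prints 1024 on this VM
  get_meminfo_field HugePages_Surp 0     # prints 0 for node 0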
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.714 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same @31 IFS=': '/read and @32 compare/continue cycle repeats, in order, for MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted; none of them match HugePages_Surp ...]
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:25.716 node0=1024 expecting 1024
12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:25.716
00:04:25.716 real 0m1.040s
00:04:25.716 user 0m0.548s
00:04:25.716 sys 0m0.558s
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:25.716 12:11:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:25.716 ************************************
00:04:25.716 END TEST no_shrink_alloc
00:04:25.716 ************************************
00:04:25.716 12:11:40 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:25.716 12:11:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:25.716 12:11:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:25.716 12:11:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:25.716 12:11:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:25.716 12:11:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:25.716 12:11:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:25.716 12:11:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:25.716 12:11:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:25.716 ************************************
00:04:25.716 END TEST hugepages
00:04:25.716 ************************************
00:04:25.716
00:04:25.716 real 0m4.492s
00:04:25.716 user 0m2.164s
00:04:25.716 sys 0m2.448s
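[editor's note] The teardown that closes the suite is compact enough to restate: clear_hp walks every NUMA node and zeroes each hugepage pool before exporting CLEAR_HUGE. A standalone sketch under the standard sysfs hugepage layout, not lifted from setup/hugepages.sh:

  #!/usr/bin/env bash
  # Zero every hugepage pool on every NUMA node, as clear_hp does above.
  # Needs root; paths are the stock kernel sysfs layout.
  shopt -s nullglob
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do   # one dir per page size
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes   # same flag the trace exports for later stages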
00:04:25.716 12:11:40 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:25.716 12:11:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:25.716 12:11:40 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:25.716 12:11:40 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:25.716 12:11:40 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:25.716 12:11:40 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:25.716 ************************************
00:04:25.716 START TEST driver
00:04:25.716 ************************************
00:04:25.716 12:11:40 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:25.975 * Looking for test storage...
00:04:25.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:25.975 12:11:40 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:25.975 12:11:40 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:25.975 12:11:40 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:26.543 12:11:41 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:26.543 12:11:41 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:26.543 12:11:41 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:26.543 ************************************
00:04:26.543 START TEST guess_driver
00:04:26.543 ************************************
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz
00:04:26.543 insmod
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:26.543 Looking for driver=uio_pci_generic 00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.543 12:11:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.109 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:27.109 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:27.109 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.110 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.110 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:27.110 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.110 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.110 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:27.110 12:11:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.392 12:11:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:27.392 12:11:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:27.392 12:11:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.392 12:11:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.959 00:04:27.959 real 0m1.402s 00:04:27.959 user 0m0.518s 00:04:27.959 sys 0m0.884s 00:04:27.959 12:11:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.959 12:11:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:27.959 ************************************ 00:04:27.959 END TEST guess_driver 00:04:27.959 ************************************ 00:04:27.959 00:04:27.959 real 0m2.061s 00:04:27.959 user 0m0.736s 00:04:27.959 sys 0m1.381s 00:04:27.959 12:11:42 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.959 12:11:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:27.959 ************************************ 00:04:27.959 END TEST driver 00:04:27.959 ************************************ 00:04:27.959 12:11:42 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:27.959 12:11:42 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.959 12:11:42 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 
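[editor's note] Spread across the trace above, pick_driver's decision is simple: choose vfio-pci only if IOMMU groups exist or unsafe no-IOMMU mode is enabled, otherwise fall back to uio_pci_generic if modprobe can resolve the module. A condensed sketch with illustrative names; the real script also inspects the insmod lines shown above rather than just modprobe's exit status:

  #!/usr/bin/env bash
  # Sketch of the driver-selection logic the guess_driver test exercises.
  shopt -s nullglob
  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      local unsafe=
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic &> /dev/null; then
          # on this VM the vfio checks failed (0 groups, '' != Y), so this wins
          echo uio_pci_generic
      else
          echo 'No valid driver found'
      fi
  }
  pick_driver   # prints uio_pci_generic in the run above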
00:04:27.959 12:11:42 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:27.959 ************************************
00:04:27.959 START TEST devices
00:04:27.959 ************************************
00:04:27.959 12:11:42 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:27.959 * Looking for test storage...
00:04:27.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:27.959 12:11:42 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:27.959 12:11:42 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:27.959 12:11:42 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:27.959 12:11:42 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:28.893 12:11:43 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:28.893 12:11:43 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:28.893 12:11:43 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:28.893 12:11:43 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:28.893 12:11:43 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:28.893 12:11:43 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:28.893 12:11:43 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:28.893 12:11:43 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
[... the same is_block_zoned probe repeats for nvme0n2, nvme0n3 and nvme1n1; every queue/zoned reads none, so no device is excluded ...]
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]]
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:28.893 12:11:43 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:28.893 12:11:43 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:04:28.893 No valid GPT data, bailing
00:04:28.893 12:11:43 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:28.893 12:11:43 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:28.893 12:11:43 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:28.893 12:11:43 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:28.893 12:11:43 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:28.893 12:11:43 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size ))
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:28.893 12:11:43 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0
[... the same block_in_use/spdk-gpt.py probe repeats for nvme0n2 and nvme0n3, both on 0000:00:11.0: no valid GPT data, empty blkid PTTYPE, and the 4294967296-byte size clears min_disk_size ...]
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]]
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1
00:04:29.152 12:11:43 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt
00:04:29.152 12:11:43 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:04:29.152 No valid GPT data, bailing
00:04:29.152 12:11:43 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:29.152 12:11:43 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:29.152 12:11:43 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:04:29.152 12:11:43 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1
00:04:29.152 12:11:43 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:04:29.152 12:11:43 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size ))
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0
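[editor's note] At this point the suite has filtered /sys/block three ways: zoned namespaces are excluded, anything already carrying GPT or partition-table signatures counts as in use, and anything under min_disk_size is skipped. A sketch of that eligibility filter using stock tools; blkid stands in for spdk-gpt.py here, and usable_disk is an illustrative name:

  #!/usr/bin/env bash
  # Sketch of the device filter traced above. Returns success for disks
  # the test would consider usable scratch devices.
  min_disk_size=3221225472   # 3 GiB, as in setup/devices.sh@198

  usable_disk() {
      local blk=$1 zoned size
      zoned=$(cat "/sys/block/$blk/queue/zoned" 2>/dev/null || echo none)
      [[ $zoned == none ]] || return 1                       # skip zoned devices
      # a non-empty PTTYPE means a partition table exists, i.e. in use
      [[ -z $(blkid -s PTTYPE -o value "/dev/$blk" 2>/dev/null) ]] || return 1
      size=$(( $(cat "/sys/block/$blk/size") * 512 ))        # sectors to bytes
      (( size >= min_disk_size ))
  }

  for blk in /sys/block/nvme*n*; do
      usable_disk "${blk##*/}" && echo "candidate: ${blk##*/}"
  done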
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 ))
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:29.152 12:11:43 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:29.152 12:11:43 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:29.152 12:11:43 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:29.152 12:11:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:29.152 ************************************
00:04:29.152 START TEST nvme_mount
00:04:29.152 ************************************
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 ))
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:29.152 12:11:43 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:30.085 Creating new GPT entries in memory.
00:04:30.085 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:30.085 other utilities.
00:04:30.085 12:11:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:30.086 12:11:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:30.086 12:11:44 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:30.086 12:11:44 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:30.086 12:11:44 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:04:31.058 Creating new GPT entries in memory.
00:04:31.058 The operation has completed successfully. 00:04:31.058 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.058 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.058 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58819 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.317 12:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.317 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.317 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:31.317 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.317 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.317 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.317 12:11:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.576 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.835 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.835 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:31.835 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.835 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.835 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.093 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:32.093 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:32.093 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.093 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:32.093 12:11:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.093 12:11:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.351 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.351 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.351 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.351 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local 
dev=0000:00:11.0 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.610 12:11:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.868 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.127 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.127 00:04:33.127 real 0m3.922s 00:04:33.127 user 0m0.676s 00:04:33.127 sys 0m0.985s 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.127 12:11:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:33.127 ************************************ 
00:04:33.127 END TEST nvme_mount 00:04:33.127 ************************************ 00:04:33.127 12:11:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:33.127 12:11:47 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.127 12:11:47 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.127 12:11:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.127 ************************************ 00:04:33.127 START TEST dm_mount 00:04:33.127 ************************************ 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.127 12:11:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:34.062 Creating new GPT entries in memory. 00:04:34.062 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.062 other utilities. 00:04:34.062 12:11:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.062 12:11:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.062 12:11:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.062 12:11:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.062 12:11:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:35.436 Creating new GPT entries in memory. 
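[Annotation] The partition_drive helper running above drives sgdisk under flock: size=1073741824 is divided by 4096 to give 262144, which each loop pass then uses as a span of 262144 sectors (128 MiB at 512-byte sectors) starting at sector 2048; the records just below repeat the same step for partition 2. A minimal standalone replay of that layout, for a disposable disk only — here partprobe stands in for the udev synchronization the harness does via sync_dev_uevents.sh:

  disk=/dev/nvme0n1                      # test disk from the log above
  sgdisk "$disk" --zap-all               # destroy existing GPT and MBR structures
  sgdisk "$disk" --new=1:2048:264191     # partition 1: 262144 sectors (128 MiB)
  sgdisk "$disk" --new=2:264192:526335   # partition 2: 262144 sectors (128 MiB)
  partprobe "$disk"                      # ask the kernel to re-read the table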
00:04:35.436 The operation has completed successfully. 00:04:35.436 12:11:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.436 12:11:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.436 12:11:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:35.436 12:11:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.436 12:11:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:36.371 The operation has completed successfully. 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59255 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:36.371 
12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.371 12:11:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.371 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.371 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:36.371 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:36.372 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.372 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.372 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.630 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.630 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.630 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- 
# local dev=0000:00:11.0 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.631 12:11:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.889 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.889 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:36.889 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:36.889 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.889 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.889 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:37.147 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.147 12:11:51 setup.sh.devices.dm_mount -- 
setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:37.147 00:04:37.147 real 0m4.171s 00:04:37.147 user 0m0.449s 00:04:37.147 sys 0m0.680s 00:04:37.147 12:11:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.147 ************************************ 00:04:37.147 END TEST dm_mount 00:04:37.147 ************************************ 00:04:37.147 12:11:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.405 12:11:52 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:37.405 12:11:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:37.405 12:11:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.405 12:11:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.405 12:11:52 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:37.405 12:11:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.405 12:11:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.662 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:37.662 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:37.662 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.662 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.662 12:11:52 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:37.662 12:11:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.662 12:11:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.662 12:11:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.662 12:11:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.662 12:11:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.662 12:11:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:37.662 ************************************ 00:04:37.662 END TEST devices 00:04:37.662 ************************************ 00:04:37.662 00:04:37.662 real 0m9.679s 00:04:37.662 user 0m1.813s 00:04:37.662 sys 0m2.273s 00:04:37.662 12:11:52 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.662 12:11:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.662 ************************************ 00:04:37.662 END TEST setup.sh 00:04:37.662 ************************************ 00:04:37.662 00:04:37.662 real 0m21.089s 00:04:37.662 user 0m6.783s 00:04:37.662 sys 0m8.836s 00:04:37.662 12:11:52 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.662 12:11:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.662 12:11:52 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:38.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.262 Hugepages 00:04:38.262 node hugesize free / total 00:04:38.262 node0 1048576kB 0 / 0 00:04:38.262 node0 2048kB 2048 / 2048 00:04:38.262 00:04:38.262 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.262 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:38.533 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:38.533 NVMe 0000:00:11.0 
1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:38.533 12:11:53 -- spdk/autotest.sh@130 -- # uname -s 00:04:38.533 12:11:53 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:38.533 12:11:53 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:38.533 12:11:53 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.099 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.356 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.356 12:11:54 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:40.291 12:11:55 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:40.291 12:11:55 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:40.291 12:11:55 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:40.291 12:11:55 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:40.291 12:11:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:40.291 12:11:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:40.291 12:11:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.291 12:11:55 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:40.291 12:11:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:40.291 12:11:55 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:40.291 12:11:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:40.291 12:11:55 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.857 Waiting for block devices as requested 00:04:40.857 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.857 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.857 12:11:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:40.857 12:11:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:40.857 12:11:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.857 12:11:55 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:40.857 12:11:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.858 12:11:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:40.858 12:11:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.858 12:11:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:40.858 12:11:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:40.858 12:11:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:40.858 12:11:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:40.858 12:11:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:40.858 12:11:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:40.858 12:11:55 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:40.858 12:11:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:40.858 12:11:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:40.858 12:11:55 -- 
common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:40.858 12:11:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:40.858 12:11:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:40.858 12:11:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:40.858 12:11:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:40.858 12:11:55 -- common/autotest_common.sh@1557 -- # continue 00:04:40.858 12:11:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:40.858 12:11:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:40.858 12:11:55 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:40.858 12:11:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:41.115 12:11:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:41.115 12:11:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:41.115 12:11:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:41.115 12:11:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:41.115 12:11:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:41.115 12:11:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:41.115 12:11:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:41.115 12:11:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:41.115 12:11:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:41.115 12:11:55 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:41.115 12:11:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:41.115 12:11:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:41.115 12:11:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:41.115 12:11:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:41.115 12:11:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:41.115 12:11:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:41.115 12:11:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:41.115 12:11:55 -- common/autotest_common.sh@1557 -- # continue 00:04:41.115 12:11:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:41.116 12:11:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.116 12:11:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.116 12:11:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:41.116 12:11:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.116 12:11:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.116 12:11:55 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.681 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.000 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.000 12:11:56 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:42.000 12:11:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:42.000 12:11:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.000 12:11:56 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:42.000 12:11:56 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:42.000 12:11:56 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 
0x0a54 00:04:42.000 12:11:56 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:42.000 12:11:56 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:42.000 12:11:56 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:42.000 12:11:56 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:42.000 12:11:56 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:42.000 12:11:56 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:42.000 12:11:56 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:42.000 12:11:56 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:42.000 12:11:56 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:42.000 12:11:56 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:42.000 12:11:56 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:42.000 12:11:56 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:42.000 12:11:56 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:42.000 12:11:56 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:42.000 12:11:56 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:42.000 12:11:56 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:42.000 12:11:56 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:42.000 12:11:56 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:42.000 12:11:56 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:42.000 12:11:56 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:42.000 12:11:56 -- common/autotest_common.sh@1593 -- # return 0 00:04:42.000 12:11:56 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:42.000 12:11:56 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:42.000 12:11:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.000 12:11:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.000 12:11:56 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:42.000 12:11:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.001 12:11:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.001 12:11:56 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:42.001 12:11:56 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:42.001 12:11:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.001 12:11:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.001 12:11:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.001 ************************************ 00:04:42.001 START TEST env 00:04:42.001 ************************************ 00:04:42.001 12:11:56 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:42.259 * Looking for test storage... 
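[Annotation] The opal_revert_cleanup path above enumerates NVMe BDFs via gen_nvme.sh piped through jq, then returns early because neither controller's PCI device ID matches 0x0a54; just before that, the block-device wait loop pulled the OACS word out of nvme-cli output with grep and cut. A standalone sketch of both probes (the BDF list is hardcoded here for illustration; the harness derives it at run time):

  for bdf in 0000:00:10.0 0000:00:11.0; do
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] \
      && echo "$bdf is eligible for OPAL revert"
  done
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # ' 0x12a' in the run above
  (( oacs & 0x8 )) && echo "namespace management supported"   # bit 3 set, hence oacs_ns_manage=8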
00:04:42.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:42.259 12:11:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:42.259 12:11:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.259 12:11:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.259 12:11:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.259 ************************************ 00:04:42.259 START TEST env_memory 00:04:42.259 ************************************ 00:04:42.259 12:11:56 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:42.259 00:04:42.259 00:04:42.259 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.259 http://cunit.sourceforge.net/ 00:04:42.259 00:04:42.259 00:04:42.259 Suite: memory 00:04:42.259 Test: alloc and free memory map ...[2024-07-25 12:11:56.930427] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.259 passed 00:04:42.259 Test: mem map translation ...[2024-07-25 12:11:56.961529] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.259 [2024-07-25 12:11:56.961594] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.259 [2024-07-25 12:11:56.961650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.259 [2024-07-25 12:11:56.961664] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.259 passed 00:04:42.259 Test: mem map registration ...[2024-07-25 12:11:57.025487] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:42.259 [2024-07-25 12:11:57.025547] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:42.259 passed 00:04:42.259 Test: mem map adjacent registrations ...passed 00:04:42.259 00:04:42.259 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.259 suites 1 1 n/a 0 0 00:04:42.259 tests 4 4 4 0 0 00:04:42.259 asserts 152 152 152 0 n/a 00:04:42.259 00:04:42.259 Elapsed time = 0.215 seconds 00:04:42.259 00:04:42.259 real 0m0.234s 00:04:42.259 user 0m0.215s 00:04:42.259 sys 0m0.016s 00:04:42.259 12:11:57 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.259 12:11:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:42.259 ************************************ 00:04:42.259 END TEST env_memory 00:04:42.259 ************************************ 00:04:42.517 12:11:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.517 12:11:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.517 12:11:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.517 12:11:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.517 ************************************ 00:04:42.517 START TEST env_vtophys 00:04:42.517 ************************************ 00:04:42.517 12:11:57 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.517 EAL: lib.eal log level changed from notice to debug 00:04:42.517 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.517 EAL: Detected lcore 1 as core 0 on socket 0 00:04:42.517 EAL: Detected lcore 2 as core 0 on socket 0 00:04:42.517 EAL: Detected lcore 3 as core 0 on socket 0 00:04:42.517 EAL: Detected lcore 4 as core 0 on socket 0 00:04:42.517 EAL: Detected lcore 5 as core 0 on socket 0 00:04:42.517 EAL: Detected lcore 6 as core 0 on socket 0 00:04:42.517 EAL: Detected lcore 7 as core 0 on socket 0 00:04:42.517 EAL: Detected lcore 8 as core 0 on socket 0 00:04:42.518 EAL: Detected lcore 9 as core 0 on socket 0 00:04:42.518 EAL: Maximum logical cores by configuration: 128 00:04:42.518 EAL: Detected CPU lcores: 10 00:04:42.518 EAL: Detected NUMA nodes: 1 00:04:42.518 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:42.518 EAL: Detected shared linkage of DPDK 00:04:42.518 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.518 EAL: Selected IOVA mode 'PA' 00:04:42.518 EAL: Probing VFIO support... 00:04:42.518 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.518 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:42.518 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.518 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.518 EAL: Setting up physically contiguous memory... 00:04:42.518 EAL: Setting maximum number of open files to 524288 00:04:42.518 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.518 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.518 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.518 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.518 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.518 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.518 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.518 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.518 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.518 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.518 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.518 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.518 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.518 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.518 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.518 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.518 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.518 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.518 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.518 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.518 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.518 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.518 EAL: Hugepages will be freed exactly as allocated. 
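[Annotation] All of the EAL bring-up above presupposes the 2 MiB hugepages that scripts/setup.sh reserved earlier (the status table showed node0 with 2048 of 2048 free). Roughly what that reservation amounts to, as a sketch only — the real script additionally handles per-NUMA-node sysfs paths, hugetlbfs mounts, and permissions:

  # reserve 2048 x 2 MiB hugepages on a single-node system (needs root)
  echo 2048 > /proc/sys/vm/nr_hugepages
  grep -E 'HugePages_(Total|Free)' /proc/meminfo   # confirm the pool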
00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: TSC frequency is ~2200000 KHz 00:04:42.518 EAL: Main lcore 0 is ready (tid=7f379ee75a00;cpuset=[0]) 00:04:42.518 EAL: Trying to obtain current memory policy. 00:04:42.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.518 EAL: Restoring previous memory policy: 0 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.518 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.518 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.518 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.518 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:42.518 00:04:42.518 00:04:42.518 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.518 http://cunit.sourceforge.net/ 00:04:42.518 00:04:42.518 00:04:42.518 Suite: components_suite 00:04:42.518 Test: vtophys_malloc_test ...passed 00:04:42.518 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.518 EAL: Restoring previous memory policy: 4 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.518 EAL: Trying to obtain current memory policy. 00:04:42.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.518 EAL: Restoring previous memory policy: 4 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.518 EAL: Trying to obtain current memory policy. 00:04:42.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.518 EAL: Restoring previous memory policy: 4 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.518 EAL: Trying to obtain current memory policy. 
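[Annotation] A pattern worth noting in the malloc suite running here: every "expanded by N MB" gets a matching "shrunk by N MB", and the sizes climb as 4, 6, 10, 18, ... MB — 2 MiB more than each doubling test allocation, consistent with one extra 2 MiB hugepage of heap overhead per allocation, though that reading is an inference from the numbers rather than something the log states. A quick way to verify the pairing against a saved copy of this output (file name hypothetical):

  grep -oE 'Heap on socket 0 was (expanded|shrunk) by [0-9]+MB' vtophys.log \
    | sort | uniq -c    # expanded/shrunk counts should match for every size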
00:04:42.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.518 EAL: Restoring previous memory policy: 4 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.518 EAL: Trying to obtain current memory policy. 00:04:42.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.518 EAL: Restoring previous memory policy: 4 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.518 EAL: Trying to obtain current memory policy. 00:04:42.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.518 EAL: Restoring previous memory policy: 4 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.518 EAL: request: mp_malloc_sync 00:04:42.518 EAL: No shared files mode enabled, IPC is disabled 00:04:42.518 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.776 EAL: request: mp_malloc_sync 00:04:42.776 EAL: No shared files mode enabled, IPC is disabled 00:04:42.776 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.776 EAL: Trying to obtain current memory policy. 00:04:42.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.776 EAL: Restoring previous memory policy: 4 00:04:42.776 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.776 EAL: request: mp_malloc_sync 00:04:42.776 EAL: No shared files mode enabled, IPC is disabled 00:04:42.776 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.776 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.776 EAL: request: mp_malloc_sync 00:04:42.776 EAL: No shared files mode enabled, IPC is disabled 00:04:42.776 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.776 EAL: Trying to obtain current memory policy. 00:04:42.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.776 EAL: Restoring previous memory policy: 4 00:04:42.776 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.776 EAL: request: mp_malloc_sync 00:04:42.776 EAL: No shared files mode enabled, IPC is disabled 00:04:42.776 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.776 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.034 EAL: request: mp_malloc_sync 00:04:43.034 EAL: No shared files mode enabled, IPC is disabled 00:04:43.034 EAL: Heap on socket 0 was shrunk by 258MB 00:04:43.034 EAL: Trying to obtain current memory policy. 
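[Annotation] The "Setting policy MPOL_PREFERRED" / "Restoring previous memory policy: 4" chatter is set_mempolicy(2) at work; numerically (per the kernel's mempolicy.h) 1 is MPOL_PREFERRED and 4 is MPOL_LOCAL, so each allocation runs under a node-0 preference and then restores local allocation. Outside the test, the same placement preference can be applied with numactl — a sketch, not how the suite itself does it:

  numactl --show                  # print the calling process's current policy
  numactl --preferred=0 sleep 1   # run a child with MPOL_PREFERRED on node 0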
00:04:43.034 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.034 EAL: Restoring previous memory policy: 4 00:04:43.034 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.034 EAL: request: mp_malloc_sync 00:04:43.034 EAL: No shared files mode enabled, IPC is disabled 00:04:43.034 EAL: Heap on socket 0 was expanded by 514MB 00:04:43.034 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.291 EAL: request: mp_malloc_sync 00:04:43.291 EAL: No shared files mode enabled, IPC is disabled 00:04:43.291 EAL: Heap on socket 0 was shrunk by 514MB 00:04:43.291 EAL: Trying to obtain current memory policy. 00:04:43.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.549 EAL: Restoring previous memory policy: 4 00:04:43.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.549 EAL: request: mp_malloc_sync 00:04:43.549 EAL: No shared files mode enabled, IPC is disabled 00:04:43.549 EAL: Heap on socket 0 was expanded by 1026MB 00:04:43.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.067 EAL: request: mp_malloc_sync 00:04:44.067 EAL: No shared files mode enabled, IPC is disabled 00:04:44.067 passed 00:04:44.067 00:04:44.067 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.067 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.067 suites 1 1 n/a 0 0 00:04:44.067 tests 2 2 2 0 0 00:04:44.067 asserts 5358 5358 5358 0 n/a 00:04:44.067 00:04:44.067 Elapsed time = 1.350 seconds 00:04:44.067 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.067 EAL: request: mp_malloc_sync 00:04:44.067 EAL: No shared files mode enabled, IPC is disabled 00:04:44.067 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.067 EAL: No shared files mode enabled, IPC is disabled 00:04:44.067 EAL: No shared files mode enabled, IPC is disabled 00:04:44.067 EAL: No shared files mode enabled, IPC is disabled 00:04:44.067 00:04:44.067 real 0m1.546s 00:04:44.067 user 0m0.848s 00:04:44.067 sys 0m0.567s 00:04:44.067 12:11:58 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.067 12:11:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:44.067 ************************************ 00:04:44.067 END TEST env_vtophys 00:04:44.067 ************************************ 00:04:44.067 12:11:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:44.067 12:11:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.067 12:11:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.067 12:11:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.067 ************************************ 00:04:44.067 START TEST env_pci 00:04:44.067 ************************************ 00:04:44.067 12:11:58 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:44.067 00:04:44.067 00:04:44.067 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.067 http://cunit.sourceforge.net/ 00:04:44.067 00:04:44.067 00:04:44.067 Suite: pci 00:04:44.067 Test: pci_hook ...[2024-07-25 12:11:58.772393] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60449 has claimed it 00:04:44.067 passed 00:04:44.067 00:04:44.067 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.067 suites 1 1 n/a 0 0 00:04:44.067 tests 1 1 1 0 0 00:04:44.067 asserts 25 25 25 0 n/a 00:04:44.067 00:04:44.067 Elapsed time = 0.002 seconds 00:04:44.067 EAL: Cannot find 
device (10000:00:01.0) 00:04:44.067 EAL: Failed to attach device on primary process 00:04:44.067 00:04:44.067 real 0m0.020s 00:04:44.067 user 0m0.012s 00:04:44.067 sys 0m0.008s 00:04:44.067 12:11:58 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.067 ************************************ 00:04:44.067 END TEST env_pci 00:04:44.067 ************************************ 00:04:44.067 12:11:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:44.067 12:11:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:44.067 12:11:58 env -- env/env.sh@15 -- # uname 00:04:44.067 12:11:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:44.067 12:11:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:44.067 12:11:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.067 12:11:58 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:44.067 12:11:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.067 12:11:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.067 ************************************ 00:04:44.067 START TEST env_dpdk_post_init 00:04:44.067 ************************************ 00:04:44.067 12:11:58 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.067 EAL: Detected CPU lcores: 10 00:04:44.067 EAL: Detected NUMA nodes: 1 00:04:44.067 EAL: Detected shared linkage of DPDK 00:04:44.067 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.067 EAL: Selected IOVA mode 'PA' 00:04:44.327 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.327 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:44.327 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:44.327 Starting DPDK initialization... 00:04:44.327 Starting SPDK post initialization... 00:04:44.327 SPDK NVMe probe 00:04:44.327 Attaching to 0000:00:10.0 00:04:44.327 Attaching to 0000:00:11.0 00:04:44.327 Attached to 0000:00:10.0 00:04:44.327 Attached to 0000:00:11.0 00:04:44.327 Cleaning up... 
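[Annotation] Which kernel driver owns each BDF decides whether these probes succeed; the setup.sh runs above flip the two controllers between nvme and uio_pci_generic, and env_dpdk_post_init then claims them with spdk_nvme. The current binding can be read straight from sysfs (sketch):

  for bdf in 0000:00:10.0 0000:00:11.0; do
    drv=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")   # symlink absent if unbound
    printf '%s -> %s\n' "$bdf" "${drv##*/}"
  done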
00:04:44.327 00:04:44.327 real 0m0.179s 00:04:44.327 user 0m0.048s 00:04:44.327 sys 0m0.031s 00:04:44.327 12:11:59 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.327 ************************************ 00:04:44.327 END TEST env_dpdk_post_init 00:04:44.327 ************************************ 00:04:44.327 12:11:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.327 12:11:59 env -- env/env.sh@26 -- # uname 00:04:44.327 12:11:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:44.327 12:11:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.327 12:11:59 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.327 12:11:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.327 12:11:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.327 ************************************ 00:04:44.327 START TEST env_mem_callbacks 00:04:44.327 ************************************ 00:04:44.327 12:11:59 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.327 EAL: Detected CPU lcores: 10 00:04:44.327 EAL: Detected NUMA nodes: 1 00:04:44.327 EAL: Detected shared linkage of DPDK 00:04:44.327 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.327 EAL: Selected IOVA mode 'PA' 00:04:44.586 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.586 00:04:44.586 00:04:44.586 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.586 http://cunit.sourceforge.net/ 00:04:44.586 00:04:44.586 00:04:44.586 Suite: memory 00:04:44.586 Test: test ... 00:04:44.586 register 0x200000200000 2097152 00:04:44.586 malloc 3145728 00:04:44.586 register 0x200000400000 4194304 00:04:44.586 buf 0x200000500000 len 3145728 PASSED 00:04:44.586 malloc 64 00:04:44.586 buf 0x2000004fff40 len 64 PASSED 00:04:44.586 malloc 4194304 00:04:44.586 register 0x200000800000 6291456 00:04:44.586 buf 0x200000a00000 len 4194304 PASSED 00:04:44.586 free 0x200000500000 3145728 00:04:44.586 free 0x2000004fff40 64 00:04:44.586 unregister 0x200000400000 4194304 PASSED 00:04:44.586 free 0x200000a00000 4194304 00:04:44.586 unregister 0x200000800000 6291456 PASSED 00:04:44.586 malloc 8388608 00:04:44.586 register 0x200000400000 10485760 00:04:44.586 buf 0x200000600000 len 8388608 PASSED 00:04:44.586 free 0x200000600000 8388608 00:04:44.586 unregister 0x200000400000 10485760 PASSED 00:04:44.586 passed 00:04:44.586 00:04:44.586 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.586 suites 1 1 n/a 0 0 00:04:44.586 tests 1 1 1 0 0 00:04:44.586 asserts 15 15 15 0 n/a 00:04:44.586 00:04:44.586 Elapsed time = 0.009 seconds 00:04:44.586 00:04:44.586 real 0m0.143s 00:04:44.586 user 0m0.016s 00:04:44.586 sys 0m0.025s 00:04:44.586 12:11:59 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.586 12:11:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:44.586 ************************************ 00:04:44.586 END TEST env_mem_callbacks 00:04:44.586 ************************************ 00:04:44.586 00:04:44.586 real 0m2.463s 00:04:44.586 user 0m1.266s 00:04:44.586 sys 0m0.847s 00:04:44.586 12:11:59 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.586 ************************************ 00:04:44.586 END TEST env 00:04:44.586 12:11:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.586 
************************************ 00:04:44.586 12:11:59 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:44.586 12:11:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.586 12:11:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.586 12:11:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.586 ************************************ 00:04:44.586 START TEST rpc 00:04:44.586 ************************************ 00:04:44.586 12:11:59 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:44.586 * Looking for test storage... 00:04:44.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.586 12:11:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60564 00:04:44.586 12:11:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.586 12:11:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60564 00:04:44.586 12:11:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:44.586 12:11:59 rpc -- common/autotest_common.sh@831 -- # '[' -z 60564 ']' 00:04:44.586 12:11:59 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.586 12:11:59 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.586 12:11:59 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.586 12:11:59 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.586 12:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.586 [2024-07-25 12:11:59.443340] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:04:44.586 [2024-07-25 12:11:59.443437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60564 ] 00:04:44.845 [2024-07-25 12:11:59.576885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.845 [2024-07-25 12:11:59.690231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:44.845 [2024-07-25 12:11:59.690288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60564' to capture a snapshot of events at runtime. 00:04:44.845 [2024-07-25 12:11:59.690312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:44.845 [2024-07-25 12:11:59.690322] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:44.845 [2024-07-25 12:11:59.690329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60564 for offline analysis/debug. 
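Everything in the rpc suite runs against the spdk_tgt just started (pid 60564) over the /var/tmp/spdk.sock Unix socket; waitforlisten simply polls until that socket accepts connections before the first rpc_cmd fires. The rpc_integrity case that follows creates an 8 MiB malloc bdev with 512-byte blocks (hence "num_blocks": 16384 in the JSON dumps below) and stacks a passthru bdev on top of it. A rough hand-driven equivalent with the stock scripts/rpc.py client, using the same RPC names the test issues:

    # sketch: the core rpc_integrity flow, issued manually
    ./scripts/rpc.py spdk_get_version                       # proves the socket is live
    ./scripts/rpc.py bdev_malloc_create 8 512               # 8 MiB / 512 B blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length             # expect 2 (Malloc0 + Passthru0)
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0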
00:04:44.845 [2024-07-25 12:11:59.690355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.779 12:12:00 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.779 12:12:00 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:45.779 12:12:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.779 12:12:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.779 12:12:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:45.779 12:12:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:45.779 12:12:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.779 12:12:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.779 12:12:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.779 ************************************ 00:04:45.779 START TEST rpc_integrity 00:04:45.779 ************************************ 00:04:45.779 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:45.779 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.779 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.779 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.779 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.779 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.780 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.780 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.780 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.780 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.780 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.780 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.780 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:45.780 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.780 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.780 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.780 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.780 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.780 { 00:04:45.780 "aliases": [ 00:04:45.780 "7276ae76-57d1-45d6-9f9a-78fbb5476f77" 00:04:45.780 ], 00:04:45.780 "assigned_rate_limits": { 00:04:45.780 "r_mbytes_per_sec": 0, 00:04:45.780 "rw_ios_per_sec": 0, 00:04:45.780 "rw_mbytes_per_sec": 0, 00:04:45.780 "w_mbytes_per_sec": 0 00:04:45.780 }, 00:04:45.780 "block_size": 512, 00:04:45.780 "claimed": false, 00:04:45.780 "driver_specific": {}, 00:04:45.780 "memory_domains": [ 00:04:45.780 { 00:04:45.780 "dma_device_id": "system", 00:04:45.780 "dma_device_type": 1 00:04:45.780 }, 00:04:45.780 { 00:04:45.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.780 "dma_device_type": 2 00:04:45.780 } 00:04:45.780 ], 00:04:45.780 "name": "Malloc0", 
00:04:45.780 "num_blocks": 16384, 00:04:45.780 "product_name": "Malloc disk", 00:04:45.780 "supported_io_types": { 00:04:45.780 "abort": true, 00:04:45.780 "compare": false, 00:04:45.780 "compare_and_write": false, 00:04:45.780 "copy": true, 00:04:45.780 "flush": true, 00:04:45.780 "get_zone_info": false, 00:04:45.780 "nvme_admin": false, 00:04:45.780 "nvme_io": false, 00:04:45.780 "nvme_io_md": false, 00:04:45.780 "nvme_iov_md": false, 00:04:45.780 "read": true, 00:04:45.780 "reset": true, 00:04:45.780 "seek_data": false, 00:04:45.780 "seek_hole": false, 00:04:45.780 "unmap": true, 00:04:45.780 "write": true, 00:04:45.780 "write_zeroes": true, 00:04:45.780 "zcopy": true, 00:04:45.780 "zone_append": false, 00:04:45.780 "zone_management": false 00:04:45.780 }, 00:04:45.780 "uuid": "7276ae76-57d1-45d6-9f9a-78fbb5476f77", 00:04:45.780 "zoned": false 00:04:45.780 } 00:04:45.780 ]' 00:04:45.780 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.780 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.780 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:45.780 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.780 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.780 [2024-07-25 12:12:00.639529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:45.780 [2024-07-25 12:12:00.639583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.780 [2024-07-25 12:12:00.639602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16c5ad0 00:04:45.780 [2024-07-25 12:12:00.639613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.780 [2024-07-25 12:12:00.641283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.780 [2024-07-25 12:12:00.641333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.038 Passthru0 00:04:46.038 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.038 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.038 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.038 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.038 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.038 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.038 { 00:04:46.038 "aliases": [ 00:04:46.038 "7276ae76-57d1-45d6-9f9a-78fbb5476f77" 00:04:46.038 ], 00:04:46.038 "assigned_rate_limits": { 00:04:46.038 "r_mbytes_per_sec": 0, 00:04:46.038 "rw_ios_per_sec": 0, 00:04:46.038 "rw_mbytes_per_sec": 0, 00:04:46.038 "w_mbytes_per_sec": 0 00:04:46.038 }, 00:04:46.038 "block_size": 512, 00:04:46.038 "claim_type": "exclusive_write", 00:04:46.038 "claimed": true, 00:04:46.038 "driver_specific": {}, 00:04:46.038 "memory_domains": [ 00:04:46.038 { 00:04:46.038 "dma_device_id": "system", 00:04:46.038 "dma_device_type": 1 00:04:46.038 }, 00:04:46.038 { 00:04:46.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.038 "dma_device_type": 2 00:04:46.038 } 00:04:46.038 ], 00:04:46.038 "name": "Malloc0", 00:04:46.038 "num_blocks": 16384, 00:04:46.038 "product_name": "Malloc disk", 00:04:46.038 "supported_io_types": { 00:04:46.038 "abort": true, 00:04:46.038 "compare": false, 00:04:46.038 
"compare_and_write": false, 00:04:46.038 "copy": true, 00:04:46.038 "flush": true, 00:04:46.038 "get_zone_info": false, 00:04:46.038 "nvme_admin": false, 00:04:46.038 "nvme_io": false, 00:04:46.038 "nvme_io_md": false, 00:04:46.038 "nvme_iov_md": false, 00:04:46.038 "read": true, 00:04:46.038 "reset": true, 00:04:46.038 "seek_data": false, 00:04:46.038 "seek_hole": false, 00:04:46.038 "unmap": true, 00:04:46.038 "write": true, 00:04:46.038 "write_zeroes": true, 00:04:46.038 "zcopy": true, 00:04:46.038 "zone_append": false, 00:04:46.038 "zone_management": false 00:04:46.038 }, 00:04:46.038 "uuid": "7276ae76-57d1-45d6-9f9a-78fbb5476f77", 00:04:46.038 "zoned": false 00:04:46.038 }, 00:04:46.038 { 00:04:46.038 "aliases": [ 00:04:46.038 "131e7106-1e39-5347-9db2-867aae79fe90" 00:04:46.038 ], 00:04:46.038 "assigned_rate_limits": { 00:04:46.038 "r_mbytes_per_sec": 0, 00:04:46.038 "rw_ios_per_sec": 0, 00:04:46.038 "rw_mbytes_per_sec": 0, 00:04:46.038 "w_mbytes_per_sec": 0 00:04:46.038 }, 00:04:46.039 "block_size": 512, 00:04:46.039 "claimed": false, 00:04:46.039 "driver_specific": { 00:04:46.039 "passthru": { 00:04:46.039 "base_bdev_name": "Malloc0", 00:04:46.039 "name": "Passthru0" 00:04:46.039 } 00:04:46.039 }, 00:04:46.039 "memory_domains": [ 00:04:46.039 { 00:04:46.039 "dma_device_id": "system", 00:04:46.039 "dma_device_type": 1 00:04:46.039 }, 00:04:46.039 { 00:04:46.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.039 "dma_device_type": 2 00:04:46.039 } 00:04:46.039 ], 00:04:46.039 "name": "Passthru0", 00:04:46.039 "num_blocks": 16384, 00:04:46.039 "product_name": "passthru", 00:04:46.039 "supported_io_types": { 00:04:46.039 "abort": true, 00:04:46.039 "compare": false, 00:04:46.039 "compare_and_write": false, 00:04:46.039 "copy": true, 00:04:46.039 "flush": true, 00:04:46.039 "get_zone_info": false, 00:04:46.039 "nvme_admin": false, 00:04:46.039 "nvme_io": false, 00:04:46.039 "nvme_io_md": false, 00:04:46.039 "nvme_iov_md": false, 00:04:46.039 "read": true, 00:04:46.039 "reset": true, 00:04:46.039 "seek_data": false, 00:04:46.039 "seek_hole": false, 00:04:46.039 "unmap": true, 00:04:46.039 "write": true, 00:04:46.039 "write_zeroes": true, 00:04:46.039 "zcopy": true, 00:04:46.039 "zone_append": false, 00:04:46.039 "zone_management": false 00:04:46.039 }, 00:04:46.039 "uuid": "131e7106-1e39-5347-9db2-867aae79fe90", 00:04:46.039 "zoned": false 00:04:46.039 } 00:04:46.039 ]' 00:04:46.039 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.039 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.039 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.039 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.039 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.039 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.039 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.039 ************************************ 00:04:46.039 END TEST rpc_integrity 00:04:46.039 ************************************ 00:04:46.039 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.039 00:04:46.039 real 0m0.318s 00:04:46.039 user 0m0.211s 00:04:46.039 sys 0m0.025s 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.039 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.039 12:12:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:46.039 12:12:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.039 12:12:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.039 12:12:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.039 ************************************ 00:04:46.039 START TEST rpc_plugins 00:04:46.039 ************************************ 00:04:46.039 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:46.039 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:46.039 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.039 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.039 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.039 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:46.039 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:46.039 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.039 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.039 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.039 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:46.039 { 00:04:46.039 "aliases": [ 00:04:46.039 "fd17b714-7db2-45ec-bb6e-0350731dff9b" 00:04:46.039 ], 00:04:46.039 "assigned_rate_limits": { 00:04:46.039 "r_mbytes_per_sec": 0, 00:04:46.039 "rw_ios_per_sec": 0, 00:04:46.039 "rw_mbytes_per_sec": 0, 00:04:46.039 "w_mbytes_per_sec": 0 00:04:46.039 }, 00:04:46.039 "block_size": 4096, 00:04:46.039 "claimed": false, 00:04:46.039 "driver_specific": {}, 00:04:46.039 "memory_domains": [ 00:04:46.039 { 00:04:46.039 "dma_device_id": "system", 00:04:46.039 "dma_device_type": 1 00:04:46.039 }, 00:04:46.039 { 00:04:46.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.039 "dma_device_type": 2 00:04:46.039 } 00:04:46.039 ], 00:04:46.039 "name": "Malloc1", 00:04:46.039 "num_blocks": 256, 00:04:46.039 "product_name": "Malloc disk", 00:04:46.039 "supported_io_types": { 00:04:46.039 "abort": true, 00:04:46.039 "compare": false, 00:04:46.039 "compare_and_write": false, 00:04:46.039 "copy": true, 00:04:46.039 "flush": true, 00:04:46.039 "get_zone_info": false, 00:04:46.039 "nvme_admin": false, 00:04:46.039 "nvme_io": false, 00:04:46.039 "nvme_io_md": false, 00:04:46.039 "nvme_iov_md": false, 00:04:46.039 "read": true, 00:04:46.039 "reset": true, 00:04:46.039 "seek_data": false, 00:04:46.039 "seek_hole": false, 00:04:46.039 "unmap": true, 00:04:46.039 "write": true, 00:04:46.039 "write_zeroes": true, 00:04:46.039 "zcopy": true, 00:04:46.039 "zone_append": false, 
00:04:46.039 "zone_management": false 00:04:46.039 }, 00:04:46.039 "uuid": "fd17b714-7db2-45ec-bb6e-0350731dff9b", 00:04:46.039 "zoned": false 00:04:46.039 } 00:04:46.039 ]' 00:04:46.039 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:46.297 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:46.297 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:46.297 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.297 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.297 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.297 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:46.297 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.297 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.297 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.297 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:46.297 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:46.297 ************************************ 00:04:46.297 END TEST rpc_plugins 00:04:46.297 ************************************ 00:04:46.297 12:12:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:46.297 00:04:46.297 real 0m0.157s 00:04:46.297 user 0m0.103s 00:04:46.297 sys 0m0.017s 00:04:46.297 12:12:01 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.297 12:12:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.298 12:12:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:46.298 12:12:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.298 12:12:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.298 12:12:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.298 ************************************ 00:04:46.298 START TEST rpc_trace_cmd_test 00:04:46.298 ************************************ 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:46.298 "bdev": { 00:04:46.298 "mask": "0x8", 00:04:46.298 "tpoint_mask": "0xffffffffffffffff" 00:04:46.298 }, 00:04:46.298 "bdev_nvme": { 00:04:46.298 "mask": "0x4000", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "blobfs": { 00:04:46.298 "mask": "0x80", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "dsa": { 00:04:46.298 "mask": "0x200", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "ftl": { 00:04:46.298 "mask": "0x40", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "iaa": { 00:04:46.298 "mask": "0x1000", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "iscsi_conn": { 00:04:46.298 "mask": "0x2", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "nvme_pcie": { 00:04:46.298 "mask": "0x800", 
00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "nvme_tcp": { 00:04:46.298 "mask": "0x2000", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "nvmf_rdma": { 00:04:46.298 "mask": "0x10", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "nvmf_tcp": { 00:04:46.298 "mask": "0x20", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "scsi": { 00:04:46.298 "mask": "0x4", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "sock": { 00:04:46.298 "mask": "0x8000", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "thread": { 00:04:46.298 "mask": "0x400", 00:04:46.298 "tpoint_mask": "0x0" 00:04:46.298 }, 00:04:46.298 "tpoint_group_mask": "0x8", 00:04:46.298 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60564" 00:04:46.298 }' 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:46.298 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:46.556 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:46.556 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:46.556 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:46.556 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:46.556 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:46.556 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:46.556 ************************************ 00:04:46.556 END TEST rpc_trace_cmd_test 00:04:46.556 ************************************ 00:04:46.556 12:12:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:46.556 00:04:46.556 real 0m0.283s 00:04:46.556 user 0m0.253s 00:04:46.556 sys 0m0.019s 00:04:46.557 12:12:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.557 12:12:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:46.557 12:12:01 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:46.557 12:12:01 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:46.557 12:12:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.557 12:12:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.557 12:12:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.557 ************************************ 00:04:46.557 START TEST go_rpc 00:04:46.557 ************************************ 00:04:46.557 12:12:01 rpc.go_rpc -- common/autotest_common.sh@1125 -- # go_rpc 00:04:46.557 12:12:01 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:46.557 12:12:01 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:46.557 12:12:01 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.815 12:12:01 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.815 12:12:01 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.815 12:12:01 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:46.815 12:12:01 rpc.go_rpc 
-- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["5bd41805-4a18-4d86-959d-489d5de0e331"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"5bd41805-4a18-4d86-959d-489d5de0e331","zoned":false}]' 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:46.815 12:12:01 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.815 12:12:01 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.815 12:12:01 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:46.815 ************************************ 00:04:46.815 END TEST go_rpc 00:04:46.815 ************************************ 00:04:46.815 12:12:01 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:46.815 00:04:46.815 real 0m0.199s 00:04:46.815 user 0m0.146s 00:04:46.815 sys 0m0.026s 00:04:46.815 12:12:01 rpc.go_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.815 12:12:01 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.815 12:12:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:46.815 12:12:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:46.815 12:12:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.815 12:12:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.815 12:12:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.815 ************************************ 00:04:46.815 START TEST rpc_daemon_integrity 00:04:46.815 ************************************ 00:04:46.815 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:46.815 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.815 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.815 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.815 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.815 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.815 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.075 { 00:04:47.075 "aliases": [ 00:04:47.075 "271b4f1d-839d-4afa-993c-b13dade51150" 00:04:47.075 ], 00:04:47.075 "assigned_rate_limits": { 00:04:47.075 "r_mbytes_per_sec": 0, 00:04:47.075 "rw_ios_per_sec": 0, 00:04:47.075 "rw_mbytes_per_sec": 0, 00:04:47.075 "w_mbytes_per_sec": 0 00:04:47.075 }, 00:04:47.075 "block_size": 512, 00:04:47.075 "claimed": false, 00:04:47.075 "driver_specific": {}, 00:04:47.075 "memory_domains": [ 00:04:47.075 { 00:04:47.075 "dma_device_id": "system", 00:04:47.075 "dma_device_type": 1 00:04:47.075 }, 00:04:47.075 { 00:04:47.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.075 "dma_device_type": 2 00:04:47.075 } 00:04:47.075 ], 00:04:47.075 "name": "Malloc3", 00:04:47.075 "num_blocks": 16384, 00:04:47.075 "product_name": "Malloc disk", 00:04:47.075 "supported_io_types": { 00:04:47.075 "abort": true, 00:04:47.075 "compare": false, 00:04:47.075 "compare_and_write": false, 00:04:47.075 "copy": true, 00:04:47.075 "flush": true, 00:04:47.075 "get_zone_info": false, 00:04:47.075 "nvme_admin": false, 00:04:47.075 "nvme_io": false, 00:04:47.075 "nvme_io_md": false, 00:04:47.075 "nvme_iov_md": false, 00:04:47.075 "read": true, 00:04:47.075 "reset": true, 00:04:47.075 "seek_data": false, 00:04:47.075 "seek_hole": false, 00:04:47.075 "unmap": true, 00:04:47.075 "write": true, 00:04:47.075 "write_zeroes": true, 00:04:47.075 "zcopy": true, 00:04:47.075 "zone_append": false, 00:04:47.075 "zone_management": false 00:04:47.075 }, 00:04:47.075 "uuid": "271b4f1d-839d-4afa-993c-b13dade51150", 00:04:47.075 "zoned": false 00:04:47.075 } 00:04:47.075 ]' 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.075 [2024-07-25 12:12:01.776556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:47.075 [2024-07-25 12:12:01.776610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:47.075 [2024-07-25 12:12:01.776634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18bcc50 00:04:47.075 [2024-07-25 12:12:01.776645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:47.075 [2024-07-25 12:12:01.778157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:47.075 [2024-07-25 12:12:01.778193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:47.075 Passthru0 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.075 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:47.076 { 00:04:47.076 "aliases": [ 00:04:47.076 "271b4f1d-839d-4afa-993c-b13dade51150" 00:04:47.076 ], 00:04:47.076 "assigned_rate_limits": { 00:04:47.076 "r_mbytes_per_sec": 0, 00:04:47.076 "rw_ios_per_sec": 0, 00:04:47.076 "rw_mbytes_per_sec": 0, 00:04:47.076 "w_mbytes_per_sec": 0 00:04:47.076 }, 00:04:47.076 "block_size": 512, 00:04:47.076 "claim_type": "exclusive_write", 00:04:47.076 "claimed": true, 00:04:47.076 "driver_specific": {}, 00:04:47.076 "memory_domains": [ 00:04:47.076 { 00:04:47.076 "dma_device_id": "system", 00:04:47.076 "dma_device_type": 1 00:04:47.076 }, 00:04:47.076 { 00:04:47.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.076 "dma_device_type": 2 00:04:47.076 } 00:04:47.076 ], 00:04:47.076 "name": "Malloc3", 00:04:47.076 "num_blocks": 16384, 00:04:47.076 "product_name": "Malloc disk", 00:04:47.076 "supported_io_types": { 00:04:47.076 "abort": true, 00:04:47.076 "compare": false, 00:04:47.076 "compare_and_write": false, 00:04:47.076 "copy": true, 00:04:47.076 "flush": true, 00:04:47.076 "get_zone_info": false, 00:04:47.076 "nvme_admin": false, 00:04:47.076 "nvme_io": false, 00:04:47.076 "nvme_io_md": false, 00:04:47.076 "nvme_iov_md": false, 00:04:47.076 "read": true, 00:04:47.076 "reset": true, 00:04:47.076 "seek_data": false, 00:04:47.076 "seek_hole": false, 00:04:47.076 "unmap": true, 00:04:47.076 "write": true, 00:04:47.076 "write_zeroes": true, 00:04:47.076 "zcopy": true, 00:04:47.076 "zone_append": false, 00:04:47.076 "zone_management": false 00:04:47.076 }, 00:04:47.076 "uuid": "271b4f1d-839d-4afa-993c-b13dade51150", 00:04:47.076 "zoned": false 00:04:47.076 }, 00:04:47.076 { 00:04:47.076 "aliases": [ 00:04:47.076 "14c974b8-f009-5536-8654-928537b890cb" 00:04:47.076 ], 00:04:47.076 "assigned_rate_limits": { 00:04:47.076 "r_mbytes_per_sec": 0, 00:04:47.076 "rw_ios_per_sec": 0, 00:04:47.076 "rw_mbytes_per_sec": 0, 00:04:47.076 "w_mbytes_per_sec": 0 00:04:47.076 }, 00:04:47.076 "block_size": 512, 00:04:47.076 "claimed": false, 00:04:47.076 "driver_specific": { 00:04:47.076 "passthru": { 00:04:47.076 "base_bdev_name": "Malloc3", 00:04:47.076 "name": "Passthru0" 00:04:47.076 } 00:04:47.076 }, 00:04:47.076 "memory_domains": [ 00:04:47.076 { 00:04:47.076 "dma_device_id": "system", 00:04:47.076 "dma_device_type": 1 00:04:47.076 }, 00:04:47.076 { 00:04:47.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.076 "dma_device_type": 2 00:04:47.076 } 00:04:47.076 ], 00:04:47.076 "name": "Passthru0", 00:04:47.076 "num_blocks": 16384, 00:04:47.076 "product_name": "passthru", 00:04:47.076 "supported_io_types": { 00:04:47.076 "abort": true, 00:04:47.076 "compare": false, 00:04:47.076 "compare_and_write": false, 00:04:47.076 "copy": true, 00:04:47.076 "flush": true, 00:04:47.076 "get_zone_info": false, 00:04:47.076 "nvme_admin": false, 00:04:47.076 "nvme_io": false, 00:04:47.076 "nvme_io_md": false, 00:04:47.076 "nvme_iov_md": false, 00:04:47.076 "read": true, 00:04:47.076 "reset": true, 00:04:47.076 "seek_data": false, 00:04:47.076 "seek_hole": false, 00:04:47.076 
"unmap": true, 00:04:47.076 "write": true, 00:04:47.076 "write_zeroes": true, 00:04:47.076 "zcopy": true, 00:04:47.076 "zone_append": false, 00:04:47.076 "zone_management": false 00:04:47.076 }, 00:04:47.076 "uuid": "14c974b8-f009-5536-8654-928537b890cb", 00:04:47.076 "zoned": false 00:04:47.076 } 00:04:47.076 ]' 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:47.076 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:47.335 ************************************ 00:04:47.335 END TEST rpc_daemon_integrity 00:04:47.335 ************************************ 00:04:47.335 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:47.335 00:04:47.335 real 0m0.318s 00:04:47.335 user 0m0.224s 00:04:47.335 sys 0m0.031s 00:04:47.335 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.335 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.335 12:12:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:47.335 12:12:01 rpc -- rpc/rpc.sh@84 -- # killprocess 60564 00:04:47.335 12:12:01 rpc -- common/autotest_common.sh@950 -- # '[' -z 60564 ']' 00:04:47.335 12:12:01 rpc -- common/autotest_common.sh@954 -- # kill -0 60564 00:04:47.335 12:12:01 rpc -- common/autotest_common.sh@955 -- # uname 00:04:47.335 12:12:01 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.335 12:12:01 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60564 00:04:47.335 killing process with pid 60564 00:04:47.335 12:12:02 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.335 12:12:02 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.335 12:12:02 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60564' 00:04:47.335 12:12:02 rpc -- common/autotest_common.sh@969 -- # kill 60564 00:04:47.335 12:12:02 rpc -- common/autotest_common.sh@974 -- # wait 60564 00:04:47.594 00:04:47.594 real 0m3.088s 00:04:47.594 user 0m4.120s 00:04:47.594 sys 0m0.704s 00:04:47.594 12:12:02 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.594 
************************************ 00:04:47.594 END TEST rpc 00:04:47.594 ************************************ 00:04:47.594 12:12:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 12:12:02 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:47.594 12:12:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.594 12:12:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.594 12:12:02 -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 ************************************ 00:04:47.594 START TEST skip_rpc 00:04:47.594 ************************************ 00:04:47.594 12:12:02 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:47.852 * Looking for test storage... 00:04:47.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.852 12:12:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.852 12:12:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:47.852 12:12:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:47.852 12:12:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.852 12:12:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.852 12:12:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.852 ************************************ 00:04:47.852 START TEST skip_rpc 00:04:47.852 ************************************ 00:04:47.852 12:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:47.852 12:12:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60825 00:04:47.852 12:12:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.852 12:12:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:47.852 12:12:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:47.852 [2024-07-25 12:12:02.600177] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
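test_skip_rpc (pid 60825 above) deliberately starts the target with --no-rpc-server, so /var/tmp/spdk.sock is never created; the NOT wrapper around rpc_cmd then asserts that the call fails with exactly the dial error logged below instead of succeeding. A condensed manual sketch of the same negative check (binary path as in this job):

    # sketch: RPC must fail when the server is disabled
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5                                   # mirrors the test's settle time
    ./scripts/rpc.py spdk_get_version \
        && echo 'unexpected: RPC answered' \
        || echo 'expected: no such socket'    # the 'connect: no such file or directory' case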
00:04:47.852 [2024-07-25 12:12:02.600271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60825 ] 00:04:48.111 [2024-07-25 12:12:02.733603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.111 [2024-07-25 12:12:02.865641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.378 2024/07/25 12:12:07 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60825 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 60825 ']' 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 60825 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60825 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.378 killing process with pid 60825 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60825' 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 60825 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 60825 00:04:53.378 00:04:53.378 real 0m5.431s 00:04:53.378 user 0m5.054s 00:04:53.378 sys 0m0.271s 00:04:53.378 ************************************ 
00:04:53.378 END TEST skip_rpc 00:04:53.378 ************************************ 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.378 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.379 12:12:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:53.379 12:12:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.379 12:12:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.379 12:12:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.379 ************************************ 00:04:53.379 START TEST skip_rpc_with_json 00:04:53.379 ************************************ 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60912 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60912 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 60912 ']' 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.379 12:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.379 [2024-07-25 12:12:08.085569] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
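skip_rpc_with_json (pid 60912 above) is a config round-trip: nvmf_get_transports is first shown failing while no TCP transport exists, one is then created, save_config snapshots the whole target state (the large JSON below), and a second target (pid 60957) is later booted from that file, with the test finally grepping its log for the TCP transport-init notice. Condensed by hand, as a sketch with the flag spellings the log itself uses:

    # sketch: save a live config and replay it on a fresh target
    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails first: no transport yet
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/config.json
    ./build/bin/spdk_tgt --json /tmp/config.json &      # replays every subsystem section
    grep -q 'TCP Transport Init' <target-log>           # <target-log> is a placeholder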
00:04:53.379 [2024-07-25 12:12:08.085684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60912 ] 00:04:53.379 [2024-07-25 12:12:08.223124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.636 [2024-07-25 12:12:08.344119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.583 [2024-07-25 12:12:09.114257] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:54.583 2024/07/25 12:12:09 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:54.583 request: 00:04:54.583 { 00:04:54.583 "method": "nvmf_get_transports", 00:04:54.583 "params": { 00:04:54.583 "trtype": "tcp" 00:04:54.583 } 00:04:54.583 } 00:04:54.583 Got JSON-RPC error response 00:04:54.583 GoRPCClient: error on JSON-RPC call 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.583 [2024-07-25 12:12:09.126365] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.583 12:12:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.583 { 00:04:54.583 "subsystems": [ 00:04:54.583 { 00:04:54.583 "subsystem": "keyring", 00:04:54.583 "config": [] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "iobuf", 00:04:54.583 "config": [ 00:04:54.583 { 00:04:54.583 "method": "iobuf_set_options", 00:04:54.583 "params": { 00:04:54.583 "large_bufsize": 135168, 00:04:54.583 "large_pool_count": 1024, 00:04:54.583 "small_bufsize": 8192, 00:04:54.583 "small_pool_count": 8192 00:04:54.583 } 00:04:54.583 } 00:04:54.583 ] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "sock", 00:04:54.583 "config": [ 00:04:54.583 { 00:04:54.583 "method": "sock_set_default_impl", 00:04:54.583 "params": { 00:04:54.583 "impl_name": "posix" 00:04:54.583 } 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "method": 
"sock_impl_set_options", 00:04:54.583 "params": { 00:04:54.583 "enable_ktls": false, 00:04:54.583 "enable_placement_id": 0, 00:04:54.583 "enable_quickack": false, 00:04:54.583 "enable_recv_pipe": true, 00:04:54.583 "enable_zerocopy_send_client": false, 00:04:54.583 "enable_zerocopy_send_server": true, 00:04:54.583 "impl_name": "ssl", 00:04:54.583 "recv_buf_size": 4096, 00:04:54.583 "send_buf_size": 4096, 00:04:54.583 "tls_version": 0, 00:04:54.583 "zerocopy_threshold": 0 00:04:54.583 } 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "method": "sock_impl_set_options", 00:04:54.583 "params": { 00:04:54.583 "enable_ktls": false, 00:04:54.583 "enable_placement_id": 0, 00:04:54.583 "enable_quickack": false, 00:04:54.583 "enable_recv_pipe": true, 00:04:54.583 "enable_zerocopy_send_client": false, 00:04:54.583 "enable_zerocopy_send_server": true, 00:04:54.583 "impl_name": "posix", 00:04:54.583 "recv_buf_size": 2097152, 00:04:54.583 "send_buf_size": 2097152, 00:04:54.583 "tls_version": 0, 00:04:54.583 "zerocopy_threshold": 0 00:04:54.583 } 00:04:54.583 } 00:04:54.583 ] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "vmd", 00:04:54.583 "config": [] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "accel", 00:04:54.583 "config": [ 00:04:54.583 { 00:04:54.583 "method": "accel_set_options", 00:04:54.583 "params": { 00:04:54.583 "buf_count": 2048, 00:04:54.583 "large_cache_size": 16, 00:04:54.583 "sequence_count": 2048, 00:04:54.583 "small_cache_size": 128, 00:04:54.583 "task_count": 2048 00:04:54.583 } 00:04:54.583 } 00:04:54.583 ] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "bdev", 00:04:54.583 "config": [ 00:04:54.583 { 00:04:54.583 "method": "bdev_set_options", 00:04:54.583 "params": { 00:04:54.583 "bdev_auto_examine": true, 00:04:54.583 "bdev_io_cache_size": 256, 00:04:54.583 "bdev_io_pool_size": 65535, 00:04:54.583 "iobuf_large_cache_size": 16, 00:04:54.583 "iobuf_small_cache_size": 128 00:04:54.583 } 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "method": "bdev_raid_set_options", 00:04:54.583 "params": { 00:04:54.583 "process_max_bandwidth_mb_sec": 0, 00:04:54.583 "process_window_size_kb": 1024 00:04:54.583 } 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "method": "bdev_iscsi_set_options", 00:04:54.583 "params": { 00:04:54.583 "timeout_sec": 30 00:04:54.583 } 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "method": "bdev_nvme_set_options", 00:04:54.583 "params": { 00:04:54.583 "action_on_timeout": "none", 00:04:54.583 "allow_accel_sequence": false, 00:04:54.583 "arbitration_burst": 0, 00:04:54.583 "bdev_retry_count": 3, 00:04:54.583 "ctrlr_loss_timeout_sec": 0, 00:04:54.583 "delay_cmd_submit": true, 00:04:54.583 "dhchap_dhgroups": [ 00:04:54.583 "null", 00:04:54.583 "ffdhe2048", 00:04:54.583 "ffdhe3072", 00:04:54.583 "ffdhe4096", 00:04:54.583 "ffdhe6144", 00:04:54.583 "ffdhe8192" 00:04:54.583 ], 00:04:54.583 "dhchap_digests": [ 00:04:54.583 "sha256", 00:04:54.583 "sha384", 00:04:54.583 "sha512" 00:04:54.583 ], 00:04:54.583 "disable_auto_failback": false, 00:04:54.583 "fast_io_fail_timeout_sec": 0, 00:04:54.583 "generate_uuids": false, 00:04:54.583 "high_priority_weight": 0, 00:04:54.583 "io_path_stat": false, 00:04:54.583 "io_queue_requests": 0, 00:04:54.583 "keep_alive_timeout_ms": 10000, 00:04:54.583 "low_priority_weight": 0, 00:04:54.583 "medium_priority_weight": 0, 00:04:54.583 "nvme_adminq_poll_period_us": 10000, 00:04:54.583 "nvme_error_stat": false, 00:04:54.583 "nvme_ioq_poll_period_us": 0, 00:04:54.583 "rdma_cm_event_timeout_ms": 0, 00:04:54.583 "rdma_max_cq_size": 
0, 00:04:54.583 "rdma_srq_size": 0, 00:04:54.583 "reconnect_delay_sec": 0, 00:04:54.583 "timeout_admin_us": 0, 00:04:54.583 "timeout_us": 0, 00:04:54.583 "transport_ack_timeout": 0, 00:04:54.583 "transport_retry_count": 4, 00:04:54.583 "transport_tos": 0 00:04:54.583 } 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "method": "bdev_nvme_set_hotplug", 00:04:54.583 "params": { 00:04:54.583 "enable": false, 00:04:54.583 "period_us": 100000 00:04:54.583 } 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "method": "bdev_wait_for_examine" 00:04:54.583 } 00:04:54.583 ] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "scsi", 00:04:54.583 "config": null 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "scheduler", 00:04:54.583 "config": [ 00:04:54.583 { 00:04:54.583 "method": "framework_set_scheduler", 00:04:54.583 "params": { 00:04:54.583 "name": "static" 00:04:54.583 } 00:04:54.583 } 00:04:54.583 ] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "vhost_scsi", 00:04:54.583 "config": [] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "vhost_blk", 00:04:54.583 "config": [] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "ublk", 00:04:54.583 "config": [] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "nbd", 00:04:54.583 "config": [] 00:04:54.583 }, 00:04:54.583 { 00:04:54.583 "subsystem": "nvmf", 00:04:54.583 "config": [ 00:04:54.583 { 00:04:54.583 "method": "nvmf_set_config", 00:04:54.583 "params": { 00:04:54.584 "admin_cmd_passthru": { 00:04:54.584 "identify_ctrlr": false 00:04:54.584 }, 00:04:54.584 "discovery_filter": "match_any" 00:04:54.584 } 00:04:54.584 }, 00:04:54.584 { 00:04:54.584 "method": "nvmf_set_max_subsystems", 00:04:54.584 "params": { 00:04:54.584 "max_subsystems": 1024 00:04:54.584 } 00:04:54.584 }, 00:04:54.584 { 00:04:54.584 "method": "nvmf_set_crdt", 00:04:54.584 "params": { 00:04:54.584 "crdt1": 0, 00:04:54.584 "crdt2": 0, 00:04:54.584 "crdt3": 0 00:04:54.584 } 00:04:54.584 }, 00:04:54.584 { 00:04:54.584 "method": "nvmf_create_transport", 00:04:54.584 "params": { 00:04:54.584 "abort_timeout_sec": 1, 00:04:54.584 "ack_timeout": 0, 00:04:54.584 "buf_cache_size": 4294967295, 00:04:54.584 "c2h_success": true, 00:04:54.584 "data_wr_pool_size": 0, 00:04:54.584 "dif_insert_or_strip": false, 00:04:54.584 "in_capsule_data_size": 4096, 00:04:54.584 "io_unit_size": 131072, 00:04:54.584 "max_aq_depth": 128, 00:04:54.584 "max_io_qpairs_per_ctrlr": 127, 00:04:54.584 "max_io_size": 131072, 00:04:54.584 "max_queue_depth": 128, 00:04:54.584 "num_shared_buffers": 511, 00:04:54.584 "sock_priority": 0, 00:04:54.584 "trtype": "TCP", 00:04:54.584 "zcopy": false 00:04:54.584 } 00:04:54.584 } 00:04:54.584 ] 00:04:54.584 }, 00:04:54.584 { 00:04:54.584 "subsystem": "iscsi", 00:04:54.584 "config": [ 00:04:54.584 { 00:04:54.584 "method": "iscsi_set_options", 00:04:54.584 "params": { 00:04:54.584 "allow_duplicated_isid": false, 00:04:54.584 "chap_group": 0, 00:04:54.584 "data_out_pool_size": 2048, 00:04:54.584 "default_time2retain": 20, 00:04:54.584 "default_time2wait": 2, 00:04:54.584 "disable_chap": false, 00:04:54.584 "error_recovery_level": 0, 00:04:54.584 "first_burst_length": 8192, 00:04:54.584 "immediate_data": true, 00:04:54.584 "immediate_data_pool_size": 16384, 00:04:54.584 "max_connections_per_session": 2, 00:04:54.584 "max_large_datain_per_connection": 64, 00:04:54.584 "max_queue_depth": 64, 00:04:54.584 "max_r2t_per_connection": 4, 00:04:54.584 "max_sessions": 128, 00:04:54.584 "mutual_chap": false, 00:04:54.584 "node_base": "iqn.2016-06.io.spdk", 
00:04:54.584 "nop_in_interval": 30, 00:04:54.584 "nop_timeout": 60, 00:04:54.584 "pdu_pool_size": 36864, 00:04:54.584 "require_chap": false 00:04:54.584 } 00:04:54.584 } 00:04:54.584 ] 00:04:54.584 } 00:04:54.584 ] 00:04:54.584 } 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60912 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 60912 ']' 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 60912 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60912 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.584 killing process with pid 60912 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60912' 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 60912 00:04:54.584 12:12:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 60912 00:04:55.151 12:12:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60957 00:04:55.151 12:12:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:55.151 12:12:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60957 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 60957 ']' 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 60957 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60957 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.415 killing process with pid 60957 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60957' 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 60957 00:05:00.415 12:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 60957 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.415 00:05:00.415 real 0m7.154s 00:05:00.415 user 0m6.906s 00:05:00.415 sys 0m0.674s 00:05:00.415 12:12:15 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.415 ************************************ 00:05:00.415 END TEST skip_rpc_with_json 00:05:00.415 ************************************ 00:05:00.415 12:12:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:00.415 12:12:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.415 12:12:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.415 12:12:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.415 ************************************ 00:05:00.415 START TEST skip_rpc_with_delay 00:05:00.415 ************************************ 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.415 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.674 [2024-07-25 12:12:15.295927] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
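
The error above is the expected outcome of this test: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which cannot happen when --no-rpc-server disables the RPC listener, so spdk_tgt must refuse to start. A minimal by-hand sketch of the same check (binary path as in this run; the harness wraps the command in its NOT helper to assert a non-zero exit):

  # Expected to fail fast with the app.c error quoted above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo $?  # non-zero exit confirms the rejection
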
00:05:00.674 [2024-07-25 12:12:15.296799] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:00.674 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:00.674 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.674 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:00.674 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.674 00:05:00.674 real 0m0.095s 00:05:00.674 user 0m0.057s 00:05:00.674 sys 0m0.034s 00:05:00.674 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.674 12:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:00.674 ************************************ 00:05:00.675 END TEST skip_rpc_with_delay 00:05:00.675 ************************************ 00:05:00.675 12:12:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:00.675 12:12:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:00.675 12:12:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:00.675 12:12:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.675 12:12:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.675 12:12:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.675 ************************************ 00:05:00.675 START TEST exit_on_failed_rpc_init 00:05:00.675 ************************************ 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61061 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61061 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 61061 ']' 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.675 12:12:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.675 [2024-07-25 12:12:15.451926] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:05:00.675 [2024-07-25 12:12:15.452084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61061 ] 00:05:00.933 [2024-07-25 12:12:15.592166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.933 [2024-07-25 12:12:15.708132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:01.867 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.867 [2024-07-25 12:12:16.516727] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:05:01.867 [2024-07-25 12:12:16.516844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61091 ] 00:05:01.867 [2024-07-25 12:12:16.677331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.125 [2024-07-25 12:12:16.803128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.125 [2024-07-25 12:12:16.803236] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
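
The "socket path in use" error above (and the failed RPC init that follows) is likewise deliberate: the first target (pid 61061) already holds the default RPC socket /var/tmp/spdk.sock, so the second instance must fail to initialize, exercising the exit-on-failed-init path. Outside the harness the collision, and the usual fix, look roughly like this sketch (/var/tmp/spdk2.sock is an arbitrary example path):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # owns /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2     # fails: socket path in use
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock  # distinct socket works
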
00:05:02.125 [2024-07-25 12:12:16.803252] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:02.125 [2024-07-25 12:12:16.803261] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61061 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 61061 ']' 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 61061 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61061 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.125 killing process with pid 61061 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61061' 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 61061 00:05:02.125 12:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 61061 00:05:02.690 00:05:02.690 real 0m1.970s 00:05:02.690 user 0m2.329s 00:05:02.690 sys 0m0.456s 00:05:02.690 12:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.690 12:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 ************************************ 00:05:02.690 END TEST exit_on_failed_rpc_init 00:05:02.690 ************************************ 00:05:02.690 12:12:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.690 00:05:02.690 real 0m14.935s 00:05:02.690 user 0m14.453s 00:05:02.690 sys 0m1.592s 00:05:02.690 12:12:17 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.690 12:12:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 ************************************ 00:05:02.690 END TEST skip_rpc 00:05:02.690 ************************************ 00:05:02.690 12:12:17 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.690 12:12:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.690 12:12:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.690 12:12:17 -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 
************************************ 00:05:02.690 START TEST rpc_client 00:05:02.690 ************************************ 00:05:02.690 12:12:17 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.690 * Looking for test storage... 00:05:02.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:02.690 12:12:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:02.690 OK 00:05:02.690 12:12:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:02.690 00:05:02.690 real 0m0.086s 00:05:02.690 user 0m0.037s 00:05:02.690 sys 0m0.054s 00:05:02.690 12:12:17 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.690 ************************************ 00:05:02.690 12:12:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 END TEST rpc_client 00:05:02.690 ************************************ 00:05:02.690 12:12:17 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.690 12:12:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.690 12:12:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.690 12:12:17 -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 ************************************ 00:05:02.690 START TEST json_config 00:05:02.690 ************************************ 00:05:02.690 12:12:17 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.947 12:12:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.947 12:12:17 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.947 12:12:17 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.947 12:12:17 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.947 12:12:17 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.947 12:12:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.947 12:12:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.948 12:12:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.948 12:12:17 json_config -- paths/export.sh@5 -- # export PATH 00:05:02.948 12:12:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.948 12:12:17 json_config -- nvmf/common.sh@47 -- # : 0 00:05:02.948 12:12:17 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:02.948 12:12:17 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:02.948 12:12:17 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.948 12:12:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.948 12:12:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.948 12:12:17 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:02.948 12:12:17 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:02.948 12:12:17 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:02.948 12:12:17 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:02.948 INFO: JSON configuration test init 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.948 Waiting for target to run... 00:05:02.948 12:12:17 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:02.948 12:12:17 json_config -- json_config/common.sh@9 -- # local app=target 00:05:02.948 12:12:17 json_config -- json_config/common.sh@10 -- # shift 00:05:02.948 12:12:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.948 12:12:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.948 12:12:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.948 12:12:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.948 12:12:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.948 12:12:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61219 00:05:02.948 12:12:17 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:02.948 12:12:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
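
For orientation, the associative arrays declared above map each app name ('target', 'initiator') to a pid, an RPC socket, CLI parameters, and a config path; json_config_test_start_app then reduces to roughly the following sketch (the real helper also arms an error trap and polls the socket before returning):

  # Condensed form of the launch traced above:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # All further setup is RPC-driven on that socket, e.g. the load_config call
  # traced below; the input file name here is illustrative:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < some_config.json
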
00:05:02.948 12:12:17 json_config -- json_config/common.sh@25 -- # waitforlisten 61219 /var/tmp/spdk_tgt.sock 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@831 -- # '[' -z 61219 ']' 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.948 12:12:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.948 [2024-07-25 12:12:17.692398] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:05:02.948 [2024-07-25 12:12:17.692498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61219 ] 00:05:03.515 [2024-07-25 12:12:18.128316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.515 [2024-07-25 12:12:18.222335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.082 00:05:04.082 12:12:18 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.082 12:12:18 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:04.082 12:12:18 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.082 12:12:18 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:04.082 12:12:18 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:04.082 12:12:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.082 12:12:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.082 12:12:18 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:04.082 12:12:18 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:04.082 12:12:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.082 12:12:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.082 12:12:18 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:04.082 12:12:18 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:04.082 12:12:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:04.650 12:12:19 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:04.650 12:12:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:04.650 12:12:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.650 12:12:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.650 12:12:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:04.650 12:12:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:04.650 12:12:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:04.650 12:12:19 
json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:04.650 12:12:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:04.650 12:12:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@51 -- # sort 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:04.909 12:12:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.909 12:12:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:04.909 12:12:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.909 12:12:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:04.909 12:12:19 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:04.909 12:12:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.167 MallocForNvmf0 00:05:05.167 12:12:19 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.167 12:12:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.425 MallocForNvmf1 00:05:05.425 12:12:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.425 12:12:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.682 [2024-07-25 
12:12:20.323510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.682 12:12:20 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.682 12:12:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.938 12:12:20 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.938 12:12:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.938 12:12:20 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:05.938 12:12:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.196 12:12:21 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.196 12:12:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.513 [2024-07-25 12:12:21.236053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.513 12:12:21 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:06.513 12:12:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.513 12:12:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.513 12:12:21 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:06.513 12:12:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.513 12:12:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.513 12:12:21 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:06.513 12:12:21 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.513 12:12:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.771 MallocBdevForConfigChangeCheck 00:05:06.771 12:12:21 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:06.771 12:12:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.771 12:12:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.028 12:12:21 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:07.028 12:12:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.286 INFO: shutting down applications... 00:05:07.286 12:12:22 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:05:07.286 12:12:22 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:07.286 12:12:22 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:07.286 12:12:22 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:07.286 12:12:22 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:07.544 Calling clear_iscsi_subsystem 00:05:07.544 Calling clear_nvmf_subsystem 00:05:07.544 Calling clear_nbd_subsystem 00:05:07.544 Calling clear_ublk_subsystem 00:05:07.544 Calling clear_vhost_blk_subsystem 00:05:07.544 Calling clear_vhost_scsi_subsystem 00:05:07.544 Calling clear_bdev_subsystem 00:05:07.544 12:12:22 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:07.544 12:12:22 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:07.544 12:12:22 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:07.544 12:12:22 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.544 12:12:22 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:07.544 12:12:22 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:08.108 12:12:22 json_config -- json_config/json_config.sh@349 -- # break 00:05:08.108 12:12:22 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:08.108 12:12:22 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:08.108 12:12:22 json_config -- json_config/common.sh@31 -- # local app=target 00:05:08.108 12:12:22 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.108 12:12:22 json_config -- json_config/common.sh@35 -- # [[ -n 61219 ]] 00:05:08.108 12:12:22 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61219 00:05:08.108 12:12:22 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.108 12:12:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.108 12:12:22 json_config -- json_config/common.sh@41 -- # kill -0 61219 00:05:08.108 12:12:22 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.673 12:12:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.673 12:12:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.673 12:12:23 json_config -- json_config/common.sh@41 -- # kill -0 61219 00:05:08.673 12:12:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.673 12:12:23 json_config -- json_config/common.sh@43 -- # break 00:05:08.673 12:12:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.673 SPDK target shutdown done 00:05:08.673 12:12:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.673 INFO: relaunching applications... 00:05:08.673 12:12:23 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
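
The teardown above pairs clear_config.py with config_filter.py: every subsystem is cleared over RPC, then the freshly saved config is re-checked until nothing but global parameters remains (up to 100 iterations, per the count seen above). Condensed, one iteration is roughly:

  /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
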
00:05:08.673 12:12:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:08.673 12:12:23 json_config -- json_config/common.sh@9 -- # local app=target 00:05:08.673 12:12:23 json_config -- json_config/common.sh@10 -- # shift 00:05:08.673 12:12:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.673 12:12:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.673 12:12:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.673 12:12:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.673 12:12:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.673 12:12:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61494 00:05:08.673 Waiting for target to run... 00:05:08.673 12:12:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.673 12:12:23 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:08.673 12:12:23 json_config -- json_config/common.sh@25 -- # waitforlisten 61494 /var/tmp/spdk_tgt.sock 00:05:08.673 12:12:23 json_config -- common/autotest_common.sh@831 -- # '[' -z 61494 ']' 00:05:08.673 12:12:23 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.673 12:12:23 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.673 12:12:23 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.673 12:12:23 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.673 12:12:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.673 [2024-07-25 12:12:23.426703] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:05:08.673 [2024-07-25 12:12:23.426814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61494 ] 00:05:09.237 [2024-07-25 12:12:23.845521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.237 [2024-07-25 12:12:23.936227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.494 [2024-07-25 12:12:24.264161] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.494 [2024-07-25 12:12:24.296234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.752 12:12:24 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.752 00:05:09.752 12:12:24 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:09.752 12:12:24 json_config -- json_config/common.sh@26 -- # echo '' 00:05:09.752 12:12:24 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:09.752 INFO: Checking if target configuration is the same... 00:05:09.752 12:12:24 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
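
The "configuration is the same" verification that follows is order-insensitive: json_diff.sh runs both the live config and the on-disk file through config_filter.py's sort method before diffing, so only real differences can fail the test. Stripped of the tracing, it boils down to this sketch (the temp-file names are illustrative; the script uses mktemp, as the trace below shows):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/live.json
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
      < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk.json
  diff -u /tmp/live.json /tmp/disk.json   # exit 0 => 'INFO: JSON config files are the same'
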
00:05:09.752 12:12:24 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.752 12:12:24 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:09.752 12:12:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.752 + '[' 2 -ne 2 ']' 00:05:09.752 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:09.752 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:09.752 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:09.752 +++ basename /dev/fd/62 00:05:09.752 ++ mktemp /tmp/62.XXX 00:05:09.752 + tmp_file_1=/tmp/62.wiY 00:05:09.752 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.752 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.752 + tmp_file_2=/tmp/spdk_tgt_config.json.bel 00:05:09.752 + ret=0 00:05:09.752 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:10.009 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:10.009 + diff -u /tmp/62.wiY /tmp/spdk_tgt_config.json.bel 00:05:10.009 INFO: JSON config files are the same 00:05:10.009 + echo 'INFO: JSON config files are the same' 00:05:10.009 + rm /tmp/62.wiY /tmp/spdk_tgt_config.json.bel 00:05:10.009 + exit 0 00:05:10.009 12:12:24 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:10.009 INFO: changing configuration and checking if this can be detected... 00:05:10.009 12:12:24 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:10.009 12:12:24 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.009 12:12:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.266 12:12:25 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.266 12:12:25 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:10.266 12:12:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.266 + '[' 2 -ne 2 ']' 00:05:10.266 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:10.266 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:10.266 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:10.266 +++ basename /dev/fd/62 00:05:10.266 ++ mktemp /tmp/62.XXX 00:05:10.266 + tmp_file_1=/tmp/62.gDt 00:05:10.266 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.266 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.524 + tmp_file_2=/tmp/spdk_tgt_config.json.QY5 00:05:10.524 + ret=0 00:05:10.524 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:10.782 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:10.782 + diff -u /tmp/62.gDt /tmp/spdk_tgt_config.json.QY5 00:05:10.782 + ret=1 00:05:10.782 + echo '=== Start of file: /tmp/62.gDt ===' 00:05:10.782 + cat /tmp/62.gDt 00:05:10.782 + echo '=== End of file: /tmp/62.gDt ===' 00:05:10.782 + echo '' 00:05:10.782 + echo '=== Start of file: /tmp/spdk_tgt_config.json.QY5 ===' 00:05:10.782 + cat /tmp/spdk_tgt_config.json.QY5 00:05:10.782 + echo '=== End of file: /tmp/spdk_tgt_config.json.QY5 ===' 00:05:10.782 + echo '' 00:05:10.783 + rm /tmp/62.gDt /tmp/spdk_tgt_config.json.QY5 00:05:10.783 + exit 1 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:10.783 INFO: configuration change detected. 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@321 -- # [[ -n 61494 ]] 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.783 12:12:25 json_config -- json_config/json_config.sh@327 -- # killprocess 61494 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@950 -- # '[' -z 61494 ']' 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@954 -- # kill -0 61494 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@955 -- # uname 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61494 00:05:10.783 
12:12:25 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.783 killing process with pid 61494 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61494' 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@969 -- # kill 61494 00:05:10.783 12:12:25 json_config -- common/autotest_common.sh@974 -- # wait 61494 00:05:11.348 12:12:25 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.348 12:12:25 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:11.348 12:12:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.348 12:12:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.348 12:12:25 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:11.348 INFO: Success 00:05:11.348 12:12:25 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:11.348 00:05:11.348 real 0m8.427s 00:05:11.348 user 0m12.120s 00:05:11.348 sys 0m1.834s 00:05:11.348 12:12:25 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.348 ************************************ 00:05:11.348 END TEST json_config 00:05:11.348 ************************************ 00:05:11.348 12:12:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.348 12:12:26 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:11.348 12:12:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.348 12:12:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.348 12:12:26 -- common/autotest_common.sh@10 -- # set +x 00:05:11.348 ************************************ 00:05:11.348 START TEST json_config_extra_key 00:05:11.348 ************************************ 00:05:11.348 12:12:26 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:05:11.348 12:12:26 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.348 12:12:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.348 12:12:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.348 12:12:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.348 12:12:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.348 12:12:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.348 12:12:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.348 12:12:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:11.348 12:12:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.348 12:12:26 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:11.348 12:12:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:11.348 INFO: launching applications... 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:11.348 12:12:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61671 00:05:11.348 Waiting for target to run... 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61671 /var/tmp/spdk_tgt.sock 00:05:11.348 12:12:26 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 61671 ']' 00:05:11.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
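
Unlike the previous suite, json_config_extra_key boots the target directly from a canned JSON file (extra_key.json, launched on the next line) rather than configuring it over RPC; the same --json flag can replay a config captured earlier with save_config. A sketch of that round trip (the output file name is illustrative):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/my_config.json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_tgt.sock --json /tmp/my_config.json
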
00:05:11.348 12:12:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:11.348 12:12:26 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.348 12:12:26 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.348 12:12:26 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.348 12:12:26 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.348 12:12:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.348 [2024-07-25 12:12:26.147467] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:05:11.348 [2024-07-25 12:12:26.147553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61671 ] 00:05:11.916 [2024-07-25 12:12:26.569339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.916 [2024-07-25 12:12:26.657145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.480 12:12:27 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.480 00:05:12.480 12:12:27 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:12.480 12:12:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:12.480 12:12:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:12.480 INFO: shutting down applications... 
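[Note: the waitforlisten call above blocks until the freshly launched target answers on its RPC socket. A rough equivalent of the idea, assuming rpc_get_methods as the liveness probe; this is a hedged sketch, not the exact autotest_common.sh body:]

    # Poll the RPC socket until the app answers or dies.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # process died while starting
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0                               # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }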
00:05:12.480 12:12:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:12.480 12:12:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:12.480 12:12:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.480 12:12:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61671 ]] 00:05:12.480 12:12:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61671 00:05:12.480 12:12:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.480 12:12:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.480 12:12:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61671 00:05:12.480 12:12:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.738 12:12:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.738 12:12:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.738 12:12:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61671 00:05:12.738 12:12:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.738 12:12:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:12.738 12:12:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.738 SPDK target shutdown done 00:05:12.738 12:12:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.738 Success 00:05:12.738 12:12:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:12.738 00:05:12.738 real 0m1.562s 00:05:12.738 user 0m1.429s 00:05:12.738 sys 0m0.426s 00:05:12.738 12:12:27 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.738 ************************************ 00:05:12.738 12:12:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.738 END TEST json_config_extra_key 00:05:12.738 ************************************ 00:05:12.996 12:12:27 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.996 12:12:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.996 12:12:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.996 12:12:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.996 ************************************ 00:05:12.996 START TEST alias_rpc 00:05:12.996 ************************************ 00:05:12.996 12:12:27 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.996 * Looking for test storage... 00:05:12.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:12.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
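[Note: the json_config_extra_key shutdown above follows the usual pattern for stopping an SPDK app cleanly: send SIGINT once, then poll with kill -0 for up to 30 half-second intervals before declaring the shutdown done or failed. Condensed from the loop visible in the trace:]

    # Sketch of json_config_test_shutdown_app's kill-and-poll loop (common.sh).
    shutdown_app_sketch() {
        local pid=$1
        kill -SIGINT "$pid"                           # ask the reactor to exit gracefully
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2> /dev/null || return 0   # gone: clean shutdown
            sleep 0.5
        done
        return 1                                      # still alive after ~15s
    }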
00:05:12.996 12:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:12.996 12:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61742 00:05:12.996 12:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.996 12:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61742 00:05:12.996 12:12:27 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 61742 ']' 00:05:12.996 12:12:27 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.996 12:12:27 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.996 12:12:27 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.996 12:12:27 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.996 12:12:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.996 [2024-07-25 12:12:27.775387] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:05:12.996 [2024-07-25 12:12:27.776371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61742 ] 00:05:13.254 [2024-07-25 12:12:27.915045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.254 [2024-07-25 12:12:28.031877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.190 12:12:28 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.190 12:12:28 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:14.190 12:12:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:14.450 12:12:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61742 00:05:14.450 12:12:29 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 61742 ']' 00:05:14.451 12:12:29 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 61742 00:05:14.451 12:12:29 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:14.451 12:12:29 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.451 12:12:29 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61742 00:05:14.451 killing process with pid 61742 00:05:14.451 12:12:29 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.451 12:12:29 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.451 12:12:29 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61742' 00:05:14.451 12:12:29 alias_rpc -- common/autotest_common.sh@969 -- # kill 61742 00:05:14.451 12:12:29 alias_rpc -- common/autotest_common.sh@974 -- # wait 61742 00:05:14.709 ************************************ 00:05:14.709 END TEST alias_rpc 00:05:14.709 ************************************ 00:05:14.709 00:05:14.709 real 0m1.886s 00:05:14.709 user 0m2.175s 00:05:14.709 sys 0m0.456s 00:05:14.709 12:12:29 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.709 12:12:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.709 12:12:29 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:14.709 12:12:29 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility 
/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.709 12:12:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.709 12:12:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.709 12:12:29 -- common/autotest_common.sh@10 -- # set +x 00:05:14.709 ************************************ 00:05:14.709 START TEST dpdk_mem_utility 00:05:14.709 ************************************ 00:05:14.709 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.968 * Looking for test storage... 00:05:14.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:14.968 12:12:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:14.968 12:12:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61834 00:05:14.968 12:12:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.968 12:12:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61834 00:05:14.968 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 61834 ']' 00:05:14.968 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.968 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.968 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.968 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.968 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.968 [2024-07-25 12:12:29.711530] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
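[Note: while the target comes up, a word on what this test exercises. It drives the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK memory layout to /tmp/spdk_mem_dump.txt, and then post-processes that dump with scripts/dpdk_mem_info.py. The core steps, using the paths from this run, look roughly like:]

    # Sketch of test_dpdk_mem_info.sh's main steps.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    $RPC env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    $MEM_SCRIPT                    # summary: heaps, mempools, memzones
    $MEM_SCRIPT -m 0               # detailed element map for heap 0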
00:05:14.968 [2024-07-25 12:12:29.712574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61834 ] 00:05:15.227 [2024-07-25 12:12:29.854165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.227 [2024-07-25 12:12:29.966243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.163 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.163 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:16.163 12:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:16.163 12:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:16.163 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.163 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.163 { 00:05:16.163 "filename": "/tmp/spdk_mem_dump.txt" 00:05:16.163 } 00:05:16.163 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.163 12:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.163 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:16.163 1 heaps totaling size 814.000000 MiB 00:05:16.163 size: 814.000000 MiB heap id: 0 00:05:16.163 end heaps---------- 00:05:16.163 8 mempools totaling size 598.116089 MiB 00:05:16.163 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:16.163 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:16.163 size: 84.521057 MiB name: bdev_io_61834 00:05:16.163 size: 51.011292 MiB name: evtpool_61834 00:05:16.163 size: 50.003479 MiB name: msgpool_61834 00:05:16.163 size: 21.763794 MiB name: PDU_Pool 00:05:16.163 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:16.163 size: 0.026123 MiB name: Session_Pool 00:05:16.163 end mempools------- 00:05:16.163 6 memzones totaling size 4.142822 MiB 00:05:16.163 size: 1.000366 MiB name: RG_ring_0_61834 00:05:16.163 size: 1.000366 MiB name: RG_ring_1_61834 00:05:16.163 size: 1.000366 MiB name: RG_ring_4_61834 00:05:16.163 size: 1.000366 MiB name: RG_ring_5_61834 00:05:16.163 size: 0.125366 MiB name: RG_ring_2_61834 00:05:16.163 size: 0.015991 MiB name: RG_ring_3_61834 00:05:16.163 end memzones------- 00:05:16.163 12:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.163 heap id: 0 total size: 814.000000 MiB number of busy elements: 229 number of free elements: 15 00:05:16.163 list of free elements. 
size: 12.484924 MiB
00:05:16.163 element at address: 0x200000400000 with size: 1.999512 MiB
00:05:16.163 element at address: 0x200018e00000 with size: 0.999878 MiB
00:05:16.163 element at address: 0x200019000000 with size: 0.999878 MiB
00:05:16.163 element at address: 0x200003e00000 with size: 0.996277 MiB
00:05:16.163 element at address: 0x200031c00000 with size: 0.994446 MiB
00:05:16.163 element at address: 0x200013800000 with size: 0.978699 MiB
00:05:16.163 element at address: 0x200007000000 with size: 0.959839 MiB
00:05:16.163 element at address: 0x200019200000 with size: 0.936584 MiB
00:05:16.163 element at address: 0x200000200000 with size: 0.836853 MiB
00:05:16.163 element at address: 0x20001aa00000 with size: 0.571533 MiB
00:05:16.163 element at address: 0x20000b200000 with size: 0.489441 MiB
00:05:16.163 element at address: 0x200000800000 with size: 0.486877 MiB
00:05:16.163 element at address: 0x200019400000 with size: 0.485657 MiB
00:05:16.163 element at address: 0x200027e00000 with size: 0.397949 MiB
00:05:16.163 element at address: 0x200003a00000 with size: 0.351501 MiB
00:05:16.163 list of standard malloc elements. size: 199.252502 MiB
00:05:16.163 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:05:16.163 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:05:16.163 element at address: 0x200018efff80 with size: 1.000122 MiB
00:05:16.163 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:05:16.163 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:16.163 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:16.163 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:05:16.163 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:16.163 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:05:16.164 [~200 further 0.000183 MiB slots elided: the 0x2000002d6xxx/0x2000002d7xxx, 0x20000087cxxx, 0x200003a5xxxx, 0x200003adxxxx/0x200003afxxxx/0x200003effxxx, 0x20000b27dxxx, 0x2000138fa8c0, 0x2000194bc740, 0x20001aa92xxx-0x20001aa954xx and 0x200027e65xxx-0x200027e6ffxx ranges]
00:05:16.165 list of memzone associated elements. size: 602.262573 MiB
(element address, element size -> associated memzone, memzone size)
0x20001aa95500  211.416748 MiB -> MP_PDU_immediate_data_Pool_0   211.416626 MiB
0x200027e6ffc0  157.562561 MiB -> MP_PDU_data_out_Pool_0         157.562439 MiB
0x2000139fab80   84.020630 MiB -> MP_bdev_io_61834_0              84.020508 MiB
0x2000009ff380   48.003052 MiB -> MP_evtpool_61834_0              48.002930 MiB
0x200003fff380   48.003052 MiB -> MP_msgpool_61834_0              48.002930 MiB
0x2000195be940   20.255554 MiB -> MP_PDU_Pool_0                   20.255432 MiB
0x200031dfeb40   18.005066 MiB -> MP_SCSI_TASK_Pool_0             18.004944 MiB
0x2000005ffe00    2.000488 MiB -> RG_MP_evtpool_61834              2.000366 MiB
0x200003bffe00    2.000488 MiB -> RG_MP_msgpool_61834              2.000366 MiB
0x2000002d7d00    1.008118 MiB -> MP_evtpool_61834                 1.007996 MiB
0x20000b2fde40    1.008118 MiB -> MP_PDU_Pool                      1.007996 MiB
0x2000194bc800    1.008118 MiB -> MP_PDU_immediate_data_Pool       1.007996 MiB
0x2000070fde40    1.008118 MiB -> MP_PDU_data_out_Pool             1.007996 MiB
0x2000008fd240    1.008118 MiB -> MP_SCSI_TASK_Pool                1.007996 MiB
0x200003eff180    1.000488 MiB -> RG_ring_0_61834                  1.000366 MiB
0x200003affc00    1.000488 MiB -> RG_ring_1_61834                  1.000366 MiB
0x2000138fa980    1.000488 MiB -> RG_ring_4_61834                  1.000366 MiB
0x200031cfe940    1.000488 MiB -> RG_ring_5_61834                  1.000366 MiB
0x200003a5b100    0.500488 MiB -> RG_MP_bdev_io_61834              0.500366 MiB
0x20000b27db80    0.500488 MiB -> RG_MP_PDU_Pool                   0.500366 MiB
0x20000087cf80    0.500488 MiB -> RG_MP_SCSI_TASK_Pool             0.500366 MiB
0x20001947c540    0.250488 MiB -> RG_MP_PDU_immediate_data_Pool    0.250366 MiB
0x200003adf880    0.125488 MiB -> RG_ring_2_61834                  0.125366 MiB
0x2000070f5b80    0.031738 MiB -> RG_MP_PDU_data_out_Pool          0.031616 MiB
0x200027e65f80    0.023743 MiB -> MP_Session_Pool_0                0.023621 MiB
0x200003adb5c0    0.016113 MiB -> RG_ring_3_61834                  0.015991 MiB
0x200027e6c0c0    0.002441 MiB -> RG_MP_Session_Pool               0.002319 MiB
0x2000002d7080    0.000305 MiB -> MP_msgpool_61834                 0.000183 MiB
0x200003adb3c0    0.000305 MiB -> MP_bdev_io_61834                 0.000183 MiB
0x200027e6cb80    0.000305 MiB -> MP_Session_Pool                  0.000183 MiB
00:05:16.165 12:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:16.165 12:12:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61834
00:05:16.165 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 61834 ']'
00:05:16.165 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 61834
00:05:16.165 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:16.166 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:16.166 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61834
00:05:16.166 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:16.166 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:16.166 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61834'
00:05:16.166 killing process with pid 61834
00:05:16.166 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 61834
00:05:16.166 12:12:30 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 61834
00:05:16.423 
00:05:16.423 real 0m1.676s
00:05:16.423 user 0m1.785s
00:05:16.423 sys 0m0.431s
00:05:16.423 12:12:31 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:16.423 ************************************
00:05:16.423 END TEST dpdk_mem_utility
00:05:16.423 ************************************
00:05:16.423 12:12:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:16.423 12:12:31 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:16.423 12:12:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:16.423 12:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:16.423 12:12:31 -- common/autotest_common.sh@10 -- # set +x
00:05:16.423 ************************************
00:05:16.423 START TEST event
00:05:16.423 ************************************
00:05:16.682 12:12:31 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:16.682 * Looking for test storage...
00:05:16.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:16.682 12:12:31 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:16.682 12:12:31 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:16.682 12:12:31 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.682 12:12:31 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:16.682 12:12:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.682 12:12:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.682 ************************************ 00:05:16.682 START TEST event_perf 00:05:16.682 ************************************ 00:05:16.682 12:12:31 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.682 Running I/O for 1 seconds...[2024-07-25 12:12:31.391533] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:05:16.682 [2024-07-25 12:12:31.391636] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61929 ] 00:05:16.682 [2024-07-25 12:12:31.528701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.939 [2024-07-25 12:12:31.634921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.939 [2024-07-25 12:12:31.635056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.940 [2024-07-25 12:12:31.635173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.940 [2024-07-25 12:12:31.635175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.895 Running I/O for 1 seconds... 00:05:17.895 lcore 0: 201559 00:05:17.895 lcore 1: 201558 00:05:17.895 lcore 2: 201562 00:05:17.895 lcore 3: 201561 00:05:17.895 done. 00:05:17.895 00:05:17.895 real 0m1.354s 00:05:17.895 user 0m4.162s 00:05:17.895 sys 0m0.070s 00:05:17.895 12:12:32 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.895 12:12:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.895 ************************************ 00:05:17.895 END TEST event_perf 00:05:17.895 ************************************ 00:05:18.153 12:12:32 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:18.153 12:12:32 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:18.153 12:12:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.153 12:12:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.153 ************************************ 00:05:18.153 START TEST event_reactor 00:05:18.153 ************************************ 00:05:18.153 12:12:32 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:18.153 [2024-07-25 12:12:32.790812] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
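[Note: the event_perf figures above are per-lcore event counts over the 1-second run: with -m 0xF the four reactors each processed roughly 201.5k events, about 806k events/sec in total (201559 + 201558 + 201562 + 201561 = 806240). A quick way to total them from a saved copy of the tool's output; the log filename here is hypothetical:]

    # Sum the per-lcore counters printed by event_perf ("lcore N: COUNT").
    awk '/^lcore [0-9]+:/ { total += $3 } END { print total " events/sec total" }' event_perf.log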
00:05:18.153 [2024-07-25 12:12:32.791051] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61962 ] 00:05:18.153 [2024-07-25 12:12:32.926670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.412 [2024-07-25 12:12:33.040032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.348 test_start 00:05:19.348 oneshot 00:05:19.348 tick 100 00:05:19.348 tick 100 00:05:19.348 tick 250 00:05:19.348 tick 100 00:05:19.348 tick 100 00:05:19.348 tick 250 00:05:19.348 tick 100 00:05:19.348 tick 500 00:05:19.348 tick 100 00:05:19.348 tick 100 00:05:19.348 tick 250 00:05:19.348 tick 100 00:05:19.348 tick 100 00:05:19.348 test_end 00:05:19.348 00:05:19.348 real 0m1.352s 00:05:19.348 user 0m1.193s 00:05:19.348 sys 0m0.053s 00:05:19.348 12:12:34 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.348 12:12:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:19.348 ************************************ 00:05:19.348 END TEST event_reactor 00:05:19.348 ************************************ 00:05:19.348 12:12:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.348 12:12:34 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:19.348 12:12:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.348 12:12:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.348 ************************************ 00:05:19.348 START TEST event_reactor_perf 00:05:19.348 ************************************ 00:05:19.348 12:12:34 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.348 [2024-07-25 12:12:34.197499] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:05:19.348 [2024-07-25 12:12:34.197609] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61998 ] 00:05:19.607 [2024-07-25 12:12:34.335398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.607 [2024-07-25 12:12:34.459345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.981 test_start 00:05:20.981 test_end 00:05:20.981 Performance: 377292 events per second 00:05:20.981 ************************************ 00:05:20.981 END TEST event_reactor_perf 00:05:20.981 ************************************ 00:05:20.981 00:05:20.981 real 0m1.363s 00:05:20.981 user 0m1.201s 00:05:20.981 sys 0m0.057s 00:05:20.981 12:12:35 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.981 12:12:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.981 12:12:35 event -- event/event.sh@49 -- # uname -s 00:05:20.981 12:12:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:20.981 12:12:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:20.981 12:12:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.981 12:12:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.981 12:12:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.981 ************************************ 00:05:20.981 START TEST event_scheduler 00:05:20.981 ************************************ 00:05:20.981 12:12:35 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:20.981 * Looking for test storage... 00:05:20.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:20.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.981 12:12:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:20.981 12:12:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62059 00:05:20.981 12:12:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.981 12:12:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62059 00:05:20.981 12:12:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:20.981 12:12:35 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 62059 ']' 00:05:20.981 12:12:35 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.981 12:12:35 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.981 12:12:35 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.981 12:12:35 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.981 12:12:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.981 [2024-07-25 12:12:35.726370] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:05:20.981 [2024-07-25 12:12:35.726489] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62059 ] 00:05:21.239 [2024-07-25 12:12:35.861722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.239 [2024-07-25 12:12:35.980710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.239 [2024-07-25 12:12:35.980855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.239 [2024-07-25 12:12:35.980968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.239 [2024-07-25 12:12:35.980974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:22.174 12:12:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.174 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.174 POWER: Cannot set governor of lcore 0 to performance 00:05:22.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.174 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.174 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.174 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:22.174 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:22.174 POWER: Unable to set Power Management Environment for lcore 0 00:05:22.174 [2024-07-25 12:12:36.748319] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:22.174 [2024-07-25 12:12:36.748433] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:22.174 [2024-07-25 12:12:36.748480] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:22.174 [2024-07-25 12:12:36.748626] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:22.174 [2024-07-25 12:12:36.748721] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:22.174 [2024-07-25 12:12:36.748840] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.174 12:12:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.174 [2024-07-25 12:12:36.841826] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
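[Note: the POWER errors above are expected inside this VM: the dynamic scheduler first tries the DPDK power governor, which needs writable cpufreq sysfs knobs or a virtio power-agent channel, and neither exists here, so it logs "Unable to initialize dpdk governor" and carries on without one. A host-side diagnostic sketch, not part of the test, for checking whether frequency scaling is even available:]

    # Does this machine expose a cpufreq governor for core 0?
    gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    if [[ -w $gov ]]; then
        echo "governor available: $(cat "$gov")"
    else
        echo "no writable cpufreq governor; SPDK dynamic scheduler runs without one"
    fi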
00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.174 12:12:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.174 12:12:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.174 ************************************ 00:05:22.174 START TEST scheduler_create_thread 00:05:22.175 ************************************ 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 2 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 3 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 4 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 5 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 6 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 7 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 8 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 9 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 10 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.175 12:12:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.741 12:12:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.741 00:05:22.741 real 0m0.590s 00:05:22.741 user 0m0.021s 00:05:22.741 sys 0m0.002s 00:05:22.741 12:12:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.741 ************************************ 00:05:22.741 END TEST scheduler_create_thread 00:05:22.741 ************************************ 00:05:22.741 12:12:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.741 12:12:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:22.741 12:12:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62059 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 62059 ']' 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 62059 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62059 00:05:22.741 killing process with pid 62059 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62059' 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 62059 00:05:22.741 12:12:37 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 62059 00:05:23.307 [2024-07-25 12:12:37.924182] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
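The scheduler_create_thread test traced above reduces to a short RPC sequence. A minimal sketch, assuming the scheduler test app is still listening on its default RPC socket and that the harness's rpc_cmd simply forwards to scripts/rpc.py; every command below appears verbatim in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Pinned threads: fully active (-a 100) and idle (-a 0), one pair per core mask.
$rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

# Unpinned threads with fractional activity.
$rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
$rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0

# Create returns a thread id; the test then raises activity on thread 11 and
# creates/deletes thread 12 to exercise removal.
$rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
$rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
$rpc --plugin scheduler_plugin scheduler_thread_delete 12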
00:05:23.307 00:05:23.307 real 0m2.556s 00:05:23.307 user 0m5.295s 00:05:23.307 sys 0m0.357s 00:05:23.307 12:12:38 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.307 ************************************ 00:05:23.307 END TEST event_scheduler 00:05:23.307 ************************************ 00:05:23.307 12:12:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.566 12:12:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:23.566 12:12:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:23.566 12:12:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.566 12:12:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.566 12:12:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.566 ************************************ 00:05:23.566 START TEST app_repeat 00:05:23.566 ************************************ 00:05:23.566 12:12:38 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:23.566 12:12:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.566 12:12:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.566 12:12:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:23.567 Process app_repeat pid: 62155 00:05:23.567 spdk_app_start Round 0 00:05:23.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62155 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62155' 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:23.567 12:12:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62155 /var/tmp/spdk-nbd.sock 00:05:23.567 12:12:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62155 ']' 00:05:23.567 12:12:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.567 12:12:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.567 12:12:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.567 12:12:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.567 12:12:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.567 [2024-07-25 12:12:38.240389] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:05:23.567 [2024-07-25 12:12:38.240487] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62155 ] 00:05:23.567 [2024-07-25 12:12:38.381789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.826 [2024-07-25 12:12:38.518716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.826 [2024-07-25 12:12:38.518729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.393 12:12:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.393 12:12:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:24.393 12:12:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.958 Malloc0 00:05:24.958 12:12:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.217 Malloc1 00:05:25.217 12:12:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.217 12:12:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.476 /dev/nbd0 00:05:25.476 12:12:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.476 12:12:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:25.476 12:12:40 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.476 1+0 records in 00:05:25.476 1+0 records out 00:05:25.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246497 s, 16.6 MB/s 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.476 12:12:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.476 12:12:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.476 12:12:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.476 12:12:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.734 /dev/nbd1 00:05:25.734 12:12:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.734 12:12:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.734 1+0 records in 00:05:25.734 1+0 records out 00:05:25.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268264 s, 15.3 MB/s 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.734 12:12:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.734 12:12:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.734 12:12:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.734 12:12:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.734 12:12:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
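For reference, the waitfornbd helper that produces the autotest_common.sh@868-@889 entries above can be reconstructed as follows; the trace only shows the success path, so the retry sleeps are an assumption:

# Reconstruction of waitfornbd; retry delay assumed, failure handling simplified.
waitfornbd() {
    local nbd_name=$1 i
    local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest

    # @871-@873: wait for the kernel to publish the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed; not visible in the trace
    done

    # @884-@889: prove the device is readable by pulling one direct-I/O block.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct
        local size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ] && return 0
        sleep 0.1    # assumed
    done
    return 1
}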
00:05:25.734 12:12:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.992 12:12:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.992 { 00:05:25.992 "bdev_name": "Malloc0", 00:05:25.992 "nbd_device": "/dev/nbd0" 00:05:25.992 }, 00:05:25.992 { 00:05:25.992 "bdev_name": "Malloc1", 00:05:25.992 "nbd_device": "/dev/nbd1" 00:05:25.992 } 00:05:25.992 ]' 00:05:25.992 12:12:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.992 { 00:05:25.992 "bdev_name": "Malloc0", 00:05:25.992 "nbd_device": "/dev/nbd0" 00:05:25.992 }, 00:05:25.992 { 00:05:25.992 "bdev_name": "Malloc1", 00:05:25.992 "nbd_device": "/dev/nbd1" 00:05:25.992 } 00:05:25.992 ]' 00:05:25.992 12:12:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.251 /dev/nbd1' 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.251 /dev/nbd1' 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.251 256+0 records in 00:05:26.251 256+0 records out 00:05:26.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00776583 s, 135 MB/s 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.251 256+0 records in 00:05:26.251 256+0 records out 00:05:26.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250348 s, 41.9 MB/s 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.251 256+0 records in 00:05:26.251 256+0 records out 00:05:26.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271106 s, 38.7 MB/s 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.251 12:12:40 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.251 12:12:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.510 12:12:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.767 12:12:41 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.767 12:12:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.026 12:12:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.026 12:12:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.285 12:12:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.543 [2024-07-25 12:12:42.325193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.802 [2024-07-25 12:12:42.432797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.802 [2024-07-25 12:12:42.432808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.802 [2024-07-25 12:12:42.487578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.802 [2024-07-25 12:12:42.487643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.332 12:12:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.332 12:12:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:30.332 spdk_app_start Round 1 00:05:30.332 12:12:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62155 /var/tmp/spdk-nbd.sock 00:05:30.332 12:12:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62155 ']' 00:05:30.332 12:12:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.332 12:12:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.332 12:12:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
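The write/verify pass that ran just before this round boundary (and repeats in every round) boils down to the following; paths and sizes are the ones in the trace, and the loop is a sketch of bdev/nbd_common.sh's nbd_dd_data_verify:

tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

dd if=/dev/urandom of=$tmp bs=4096 count=256           # seed 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct  # push it through each NBD device
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $tmp $nbd                             # byte-compare what the Malloc bdev stored
done
rm $tmp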
00:05:30.332 12:12:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.332 12:12:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.591 12:12:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.591 12:12:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:30.591 12:12:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.849 Malloc0 00:05:30.849 12:12:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.106 Malloc1 00:05:31.365 12:12:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.365 12:12:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.365 /dev/nbd0 00:05:31.623 12:12:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.623 12:12:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.623 1+0 records in 00:05:31.623 1+0 records out 
00:05:31.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420924 s, 9.7 MB/s 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:31.623 12:12:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:31.623 12:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.623 12:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.623 12:12:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.882 /dev/nbd1 00:05:31.882 12:12:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.882 12:12:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.882 1+0 records in 00:05:31.882 1+0 records out 00:05:31.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335236 s, 12.2 MB/s 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:31.882 12:12:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:31.882 12:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.882 12:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.882 12:12:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.882 12:12:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.882 12:12:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.143 { 00:05:32.143 "bdev_name": "Malloc0", 00:05:32.143 "nbd_device": "/dev/nbd0" 00:05:32.143 }, 00:05:32.143 { 00:05:32.143 "bdev_name": "Malloc1", 00:05:32.143 "nbd_device": "/dev/nbd1" 00:05:32.143 } 
00:05:32.143 ]' 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.143 { 00:05:32.143 "bdev_name": "Malloc0", 00:05:32.143 "nbd_device": "/dev/nbd0" 00:05:32.143 }, 00:05:32.143 { 00:05:32.143 "bdev_name": "Malloc1", 00:05:32.143 "nbd_device": "/dev/nbd1" 00:05:32.143 } 00:05:32.143 ]' 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.143 /dev/nbd1' 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.143 /dev/nbd1' 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.143 256+0 records in 00:05:32.143 256+0 records out 00:05:32.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00580612 s, 181 MB/s 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.143 256+0 records in 00:05:32.143 256+0 records out 00:05:32.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249797 s, 42.0 MB/s 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.143 12:12:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.143 256+0 records in 00:05:32.143 256+0 records out 00:05:32.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320793 s, 32.7 MB/s 00:05:32.143 12:12:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.401 12:12:47 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.401 12:12:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.659 12:12:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.917 12:12:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.918 12:12:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.175 12:12:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.175 12:12:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.433 12:12:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.691 [2024-07-25 12:12:48.373908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.691 [2024-07-25 12:12:48.479807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.691 [2024-07-25 12:12:48.479821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.691 [2024-07-25 12:12:48.534810] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.691 [2024-07-25 12:12:48.534877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.977 12:12:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.977 spdk_app_start Round 2 00:05:36.977 12:12:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:36.977 12:12:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62155 /var/tmp/spdk-nbd.sock 00:05:36.977 12:12:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62155 ']' 00:05:36.977 12:12:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.977 12:12:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.977 12:12:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
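The device count checked before and after nbd_stop_disk comes from this pattern: nbd_get_disks returns JSON, jq extracts the device paths, and grep -c counts them. A sketch assuming the same socket path as above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
nbd_disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits nonzero on zero matches, hence the || true guard.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # 2 while mapped, 0 after teardown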
00:05:36.977 12:12:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.977 12:12:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.977 12:12:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.977 12:12:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:36.977 12:12:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.977 Malloc0 00:05:36.977 12:12:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.236 Malloc1 00:05:37.236 12:12:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.236 12:12:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.494 /dev/nbd0 00:05:37.494 12:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.494 12:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.494 1+0 records in 00:05:37.494 1+0 records out 
00:05:37.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330411 s, 12.4 MB/s 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.494 12:12:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.494 12:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.494 12:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.494 12:12:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.752 /dev/nbd1 00:05:38.011 12:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.011 12:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.011 1+0 records in 00:05:38.011 1+0 records out 00:05:38.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341955 s, 12.0 MB/s 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.011 12:12:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.011 12:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.011 12:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.011 12:12:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.011 12:12:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.011 12:12:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.268 { 00:05:38.268 "bdev_name": "Malloc0", 00:05:38.268 "nbd_device": "/dev/nbd0" 00:05:38.268 }, 00:05:38.268 { 00:05:38.268 "bdev_name": "Malloc1", 00:05:38.268 "nbd_device": "/dev/nbd1" 00:05:38.268 } 
00:05:38.268 ]' 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.268 { 00:05:38.268 "bdev_name": "Malloc0", 00:05:38.268 "nbd_device": "/dev/nbd0" 00:05:38.268 }, 00:05:38.268 { 00:05:38.268 "bdev_name": "Malloc1", 00:05:38.268 "nbd_device": "/dev/nbd1" 00:05:38.268 } 00:05:38.268 ]' 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.268 /dev/nbd1' 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.268 /dev/nbd1' 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.268 256+0 records in 00:05:38.268 256+0 records out 00:05:38.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00772628 s, 136 MB/s 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.268 256+0 records in 00:05:38.268 256+0 records out 00:05:38.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254992 s, 41.1 MB/s 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.268 12:12:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.268 256+0 records in 00:05:38.269 256+0 records out 00:05:38.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264898 s, 39.6 MB/s 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.269 12:12:53 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.269 12:12:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.526 12:12:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.526 12:12:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.526 12:12:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.526 12:12:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.527 12:12:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.527 12:12:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.527 12:12:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.527 12:12:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.527 12:12:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.527 12:12:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.783 12:12:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.042 12:12:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.042 12:12:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.042 12:12:53 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:39.314 12:12:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.315 12:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.315 12:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.315 12:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.315 12:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.315 12:12:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.315 12:12:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.315 12:12:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.315 12:12:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.315 12:12:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.572 12:12:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.572 [2024-07-25 12:12:54.414857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.830 [2024-07-25 12:12:54.511341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.830 [2024-07-25 12:12:54.511347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.830 [2024-07-25 12:12:54.565403] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.830 [2024-07-25 12:12:54.565471] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.357 12:12:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62155 /var/tmp/spdk-nbd.sock 00:05:42.357 12:12:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62155 ']' 00:05:42.357 12:12:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.357 12:12:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.357 12:12:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
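Stepping back, the three rounds in this test come from the loop in test/event/event.sh, reconstructed here from the event.sh@23-@25 and @34-@35 trace entries; the per-round body is elided, and the app itself (started once with -t 4) re-enters spdk_app_start after each SIGTERM:

for i in {0..2}; do                                   # event.sh@23
    echo "spdk_app_start Round $i"                    # event.sh@24
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock  # event.sh@25
    # ...create Malloc0/Malloc1, map them over /dev/nbd0-1, run the dd/cmp pass...
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # event.sh@34
    sleep 3                                           # event.sh@35
done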
00:05:42.357 12:12:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.357 12:12:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:42.923 12:12:57 event.app_repeat -- event/event.sh@39 -- # killprocess 62155 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 62155 ']' 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 62155 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62155 00:05:42.923 killing process with pid 62155 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62155' 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@969 -- # kill 62155 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@974 -- # wait 62155 00:05:42.923 spdk_app_start is called in Round 0. 00:05:42.923 Shutdown signal received, stop current app iteration 00:05:42.923 Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 reinitialization... 00:05:42.923 spdk_app_start is called in Round 1. 00:05:42.923 Shutdown signal received, stop current app iteration 00:05:42.923 Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 reinitialization... 00:05:42.923 spdk_app_start is called in Round 2. 00:05:42.923 Shutdown signal received, stop current app iteration 00:05:42.923 Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 reinitialization... 00:05:42.923 spdk_app_start is called in Round 3. 00:05:42.923 Shutdown signal received, stop current app iteration 00:05:42.923 ************************************ 00:05:42.923 END TEST app_repeat 00:05:42.923 ************************************ 00:05:42.923 12:12:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:42.923 12:12:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:42.923 00:05:42.923 real 0m19.510s 00:05:42.923 user 0m43.654s 00:05:42.923 sys 0m3.172s 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.923 12:12:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.923 12:12:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:42.923 12:12:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.923 12:12:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.923 12:12:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.923 12:12:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.923 ************************************ 00:05:42.923 START TEST cpu_locks 00:05:42.923 ************************************ 00:05:42.923 12:12:57 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.181 * Looking for test storage... 
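Both teardown paths in this log (pid 62059 for the scheduler app earlier, pid 62155 for app_repeat just above) go through the same killprocess helper. A reconstruction from the autotest_common.sh@950-@974 entries, with the non-Linux and sudo branches simplified:

# Reconstruction; only the path exercised in this log is shown.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                        # @950: refuse an empty pid
    kill -0 "$pid" || return 0                       # @954: already gone, nothing to do
    if [ "$(uname)" = Linux ]; then                  # @955
        local process_name=$(ps --no-headers -o comm= "$pid")   # @956: e.g. reactor_0
    fi
    if [ "$process_name" != sudo ]; then             # @960: the sudo child-kill branch is elided
        echo "killing process with pid $pid"         # @968
        kill "$pid"                                  # @969
        wait "$pid"                                  # @974
    fi
}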
00:05:43.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:43.181 12:12:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:43.181 12:12:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:43.181 12:12:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:43.181 12:12:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:43.181 12:12:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.181 12:12:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.181 12:12:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.181 ************************************ 00:05:43.181 START TEST default_locks 00:05:43.181 ************************************ 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62786 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62786 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 62786 ']' 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.181 12:12:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.181 [2024-07-25 12:12:57.918695] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
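default_locks above starts spdk_tgt with -m 0x1 and then proves the core claim exists. The check at cpu_locks.sh@22, visible verbatim in the trace as the lslocks/grep pair, reduces to:

# locks_exist: a target that claimed a core holds a lock on a
# /var/tmp/spdk_cpu_lock_* file, which lslocks can list per pid
locks_exist() {
  lslocks -p "$1" | grep -q spdk_cpu_lock
}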
00:05:43.181 [2024-07-25 12:12:57.918796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62786 ] 00:05:43.439 [2024-07-25 12:12:58.048692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.439 [2024-07-25 12:12:58.165151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.374 12:12:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.374 12:12:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:44.374 12:12:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62786 00:05:44.374 12:12:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.374 12:12:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62786 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62786 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 62786 ']' 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 62786 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62786 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.633 killing process with pid 62786 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62786' 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 62786 00:05:44.633 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 62786 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62786 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62786 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 62786 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 62786 ']' 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.891 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.891 ERROR: process (pid: 62786) is no longer running 00:05:44.891 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (62786) - No such process 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.891 00:05:44.891 real 0m1.887s 00:05:44.891 user 0m2.015s 00:05:44.891 sys 0m0.567s 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.891 ************************************ 00:05:44.891 END TEST default_locks 00:05:44.891 ************************************ 00:05:44.891 12:12:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.149 12:12:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:45.149 12:12:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.149 12:12:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.149 12:12:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.149 ************************************ 00:05:45.149 START TEST default_locks_via_rpc 00:05:45.149 ************************************ 00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62850 00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62850 00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62850 ']' 00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
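The NOT wrapper exercised above (autotest_common.sh@650-677 in the trace) inverts the wrapped command's exit status: waitforlisten on the now-dead pid 62786 has to fail for default_locks to pass. Stripped of the argument validation and the es bookkeeping visible in the trace, it behaves roughly like:

NOT() {
  if "$@"; then
    return 1   # wrapped command unexpectedly succeeded
  fi
  return 0     # failure was expected
}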
00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.149 12:12:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.149 [2024-07-25 12:12:59.868777] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:05:45.149 [2024-07-25 12:12:59.868891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62850 ] 00:05:45.149 [2024-07-25 12:13:00.007491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.406 [2024-07-25 12:13:00.120913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62850 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.340 12:13:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62850 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62850 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 62850 ']' 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 62850 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62850 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.598 killing process with pid 62850 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62850' 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 62850 00:05:46.598 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 62850 00:05:47.165 00:05:47.165 real 0m1.925s 00:05:47.165 user 0m2.076s 00:05:47.165 sys 0m0.575s 00:05:47.165 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.165 12:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.165 ************************************ 00:05:47.165 END TEST default_locks_via_rpc 00:05:47.165 ************************************ 00:05:47.165 12:13:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:47.165 12:13:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.165 12:13:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.165 12:13:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.165 ************************************ 00:05:47.165 START TEST non_locking_app_on_locked_coremask 00:05:47.165 ************************************ 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62919 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62919 /var/tmp/spdk.sock 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 62919 ']' 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.165 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.165 [2024-07-25 12:13:01.832686] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
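The default_locks_via_rpc case that just finished above toggles the claims at runtime instead of at startup: rpc_cmd framework_disable_cpumask_locks releases the lock files and framework_enable_cpumask_locks re-acquires them, after which the same lslocks check passes. The standalone equivalent of those two traced calls, assuming the default /var/tmp/spdk.sock socket:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks   # drop /var/tmp/spdk_cpu_lock_*
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks    # claim them again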
00:05:47.165 [2024-07-25 12:13:01.832784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62919 ] 00:05:47.165 [2024-07-25 12:13:01.967742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.424 [2024-07-25 12:13:02.224747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62947 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62947 /var/tmp/spdk2.sock 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 62947 ']' 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.358 12:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.358 [2024-07-25 12:13:02.951420] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:05:48.358 [2024-07-25 12:13:02.951506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62947 ] 00:05:48.358 [2024-07-25 12:13:03.092611] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
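non_locking_app_on_locked_coremask pairs the two launch modes seen above: the first target claims core 0, and the second joins the same core only because --disable-cpumask-locks skips the claim (hence the "CPU core locks deactivated" notice). Reduced to the two commands from the trace:

build/bin/spdk_tgt -m 0x1 &                                                 # pid 62919 claims core 0
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # pid 62947 shares it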
00:05:48.358 [2024-07-25 12:13:03.092672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.616 [2024-07-25 12:13:03.324521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.214 12:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.214 12:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:49.214 12:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62919 00:05:49.214 12:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62919 00:05:49.214 12:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62919 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 62919 ']' 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 62919 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62919 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.147 killing process with pid 62919 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62919' 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 62919 00:05:50.147 12:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 62919 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62947 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 62947 ']' 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 62947 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62947 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.079 killing process with pid 62947 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62947' 00:05:51.079 12:13:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 62947 00:05:51.079 12:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 62947 00:05:51.388 00:05:51.388 real 0m4.255s 00:05:51.388 user 0m4.541s 00:05:51.388 sys 0m1.430s 00:05:51.388 12:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.388 12:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.388 ************************************ 00:05:51.388 END TEST non_locking_app_on_locked_coremask 00:05:51.388 ************************************ 00:05:51.388 12:13:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.388 12:13:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.388 12:13:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.388 12:13:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.388 ************************************ 00:05:51.388 START TEST locking_app_on_unlocked_coremask 00:05:51.388 ************************************ 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63026 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63026 /var/tmp/spdk.sock 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63026 ']' 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.388 12:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.388 [2024-07-25 12:13:06.136823] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:05:51.388 [2024-07-25 12:13:06.136912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63026 ] 00:05:51.647 [2024-07-25 12:13:06.270768] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
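killprocess, run twice above to reap pids 62919 and 62947 before the locking_app_on_unlocked_coremask pair spins up, checks that the target is alive and that its comm name is not a sudo wrapper before signalling it. A condensed sketch of the flow the trace walks through (@954-@974); the real helper's sudo branch, omitted here, signals the wrapped child instead:

killprocess() {
  local pid=$1
  kill -0 "$pid" || return 0                         # @954: already gone
  local name=reactor_0
  [[ $(uname) == Linux ]] && name=$(ps --no-headers -o comm= "$pid")
  [[ $name == sudo ]] && return 1                    # @960: never signal sudo itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                    # @974: reap so the socket vanishes
}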
00:05:51.647 [2024-07-25 12:13:06.270824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.647 [2024-07-25 12:13:06.379252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63054 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63054 /var/tmp/spdk2.sock 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63054 ']' 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.579 12:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.579 [2024-07-25 12:13:07.202196] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:05:52.579 [2024-07-25 12:13:07.202318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63054 ] 00:05:52.579 [2024-07-25 12:13:07.348755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.837 [2024-07-25 12:13:07.572833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.402 12:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.402 12:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.402 12:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63054 00:05:53.402 12:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63054 00:05:53.402 12:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63026 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63026 ']' 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 63026 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63026 00:05:54.333 killing process with pid 63026 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63026' 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 63026 00:05:54.333 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 63026 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63054 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63054 ']' 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 63054 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63054 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.264 killing process with pid 63054 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63054' 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 63054 00:05:55.264 12:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 63054 00:05:55.522 ************************************ 00:05:55.522 END TEST locking_app_on_unlocked_coremask 00:05:55.522 ************************************ 00:05:55.522 00:05:55.522 real 0m4.189s 00:05:55.522 user 0m4.754s 00:05:55.522 sys 0m1.109s 00:05:55.522 12:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.522 12:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.522 12:13:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:55.522 12:13:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.522 12:13:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.522 12:13:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.522 ************************************ 00:05:55.523 START TEST locking_app_on_locked_coremask 00:05:55.523 ************************************ 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63133 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63133 /var/tmp/spdk.sock 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63133 ']' 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.523 12:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.781 [2024-07-25 12:13:10.394450] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:05:55.781 [2024-07-25 12:13:10.394559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63133 ] 00:05:55.781 [2024-07-25 12:13:10.532072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.039 [2024-07-25 12:13:10.693956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.605 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.605 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:56.605 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63161 00:05:56.605 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63161 /var/tmp/spdk2.sock 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63161 /var/tmp/spdk2.sock 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:56.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 63161 /var/tmp/spdk2.sock 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63161 ']' 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.606 12:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.863 [2024-07-25 12:13:11.490919] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
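locking_app_on_locked_coremask drives the negative case: a second spdk_tgt on the same core must refuse to start, which is exactly what the claim_cpu_cores error just below reports for core 0, already claimed by pid 63133. The per-core claim is an advisory lock on a /var/tmp/spdk_cpu_lock_* file; as a try-lock illustration only, flock(1) stands in here for the target's own locking code in app.c, so this probe will not necessarily conflict with a live target:

# non-blocking attempt on the core-0 lock file; failure means it is held
flock -n /var/tmp/spdk_cpu_lock_000 true || echo "core 0 lock is already held"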
00:05:56.863 [2024-07-25 12:13:11.491686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63161 ] 00:05:56.864 [2024-07-25 12:13:11.635625] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63133 has claimed it. 00:05:56.864 [2024-07-25 12:13:11.635689] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.429 ERROR: process (pid: 63161) is no longer running 00:05:57.429 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63161) - No such process 00:05:57.429 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.429 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:57.429 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:57.429 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.429 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.429 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.429 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63133 00:05:57.429 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63133 00:05:57.429 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63133 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63133 ']' 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63133 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63133 00:05:57.994 killing process with pid 63133 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63133' 00:05:57.994 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63133 00:05:57.995 12:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63133 00:05:58.252 00:05:58.252 real 0m2.767s 00:05:58.252 user 0m3.228s 00:05:58.252 sys 0m0.713s 00:05:58.253 12:13:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.253 12:13:13 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:58.253 ************************************ 00:05:58.253 END TEST locking_app_on_locked_coremask 00:05:58.253 ************************************ 00:05:58.511 12:13:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:58.511 12:13:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.511 12:13:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.511 12:13:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.511 ************************************ 00:05:58.511 START TEST locking_overlapped_coremask 00:05:58.511 ************************************ 00:05:58.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63218 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63218 /var/tmp/spdk.sock 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 63218 ']' 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.511 12:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.511 [2024-07-25 12:13:13.209817] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:05:58.511 [2024-07-25 12:13:13.209917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63218 ] 00:05:58.511 [2024-07-25 12:13:13.349563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.769 [2024-07-25 12:13:13.465023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.769 [2024-07-25 12:13:13.465117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.769 [2024-07-25 12:13:13.465137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63248 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63248 /var/tmp/spdk2.sock 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63248 /var/tmp/spdk2.sock 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 63248 /var/tmp/spdk2.sock 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 63248 ']' 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.335 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.593 [2024-07-25 12:13:14.202972] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
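The masks chosen here are the point of locking_overlapped_coremask: the first target's 0x7 covers cores 0-2 and the second's 0x1c covers cores 2-4, so the claim on core 2 just below must fail. Afterwards check_remaining_locks (cpu_locks.sh@36-38 in the trace) asserts that exactly the surviving target's three lock files remain:

locks=(/var/tmp/spdk_cpu_lock_*)
expected=(/var/tmp/spdk_cpu_lock_{000..002})        # mask 0x7 => cores 0, 1, 2
[[ "${locks[*]}" == "${expected[*]}" ]] || echo "stray or missing lock files"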
00:05:59.593 [2024-07-25 12:13:14.203257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63248 ] 00:05:59.593 [2024-07-25 12:13:14.346272] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63218 has claimed it. 00:05:59.593 [2024-07-25 12:13:14.346541] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.157 ERROR: process (pid: 63248) is no longer running 00:06:00.157 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63248) - No such process 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63218 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 63218 ']' 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 63218 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63218 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.157 killing process with pid 63218 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63218' 00:06:00.157 12:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 63218 00:06:00.157 12:13:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 63218 00:06:00.723 ************************************ 00:06:00.723 END TEST locking_overlapped_coremask 00:06:00.723 ************************************ 00:06:00.723 00:06:00.723 real 0m2.233s 00:06:00.723 user 0m6.153s 00:06:00.724 sys 0m0.425s 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.724 12:13:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:00.724 12:13:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.724 12:13:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.724 12:13:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.724 ************************************ 00:06:00.724 START TEST locking_overlapped_coremask_via_rpc 00:06:00.724 ************************************ 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63294 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63294 /var/tmp/spdk.sock 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63294 ']' 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.724 12:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.724 [2024-07-25 12:13:15.476467] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:06:00.724 [2024-07-25 12:13:15.477243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63294 ] 00:06:00.982 [2024-07-25 12:13:15.610629] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.982 [2024-07-25 12:13:15.610897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.982 [2024-07-25 12:13:15.722918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.982 [2024-07-25 12:13:15.723053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.982 [2024-07-25 12:13:15.723056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63324 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63324 /var/tmp/spdk2.sock 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63324 ']' 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.915 12:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.915 [2024-07-25 12:13:16.555496] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:06:01.915 [2024-07-25 12:13:16.555619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63324 ] 00:06:01.915 [2024-07-25 12:13:16.698908] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:01.915 [2024-07-25 12:13:16.702306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.172 [2024-07-25 12:13:16.927165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.172 [2024-07-25 12:13:16.927324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.172 [2024-07-25 12:13:16.927325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.736 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.736 [2024-07-25 12:13:17.509419] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63294 has claimed it. 00:06:02.736 2024/07/25 12:13:17 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:02.736 request: 00:06:02.736 { 00:06:02.736 "method": "framework_enable_cpumask_locks", 00:06:02.736 "params": {} 00:06:02.736 } 00:06:02.736 Got JSON-RPC error response 00:06:02.736 GoRPCClient: error on JSON-RPC call 00:06:02.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
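Here the same overlap is provoked over RPC: both targets came up with --disable-cpumask-locks, the 0x7 target enabled its locks successfully, and the framework_enable_cpumask_locks call against the second socket returns the -32603 "Failed to claim CPU core: 2" error the Go client reports just above. The expected-failure assertion reduces to:

# enabling locks on the 0x1c target must fail while the 0x7 target holds core 2
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
  echo "unexpected success" >&2
  exit 1
fi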
00:06:02.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63294 /var/tmp/spdk.sock
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63294 ']'
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:02.737 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:02.994 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:02.994 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:02.994 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63324 /var/tmp/spdk2.sock
00:06:02.994 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63324 ']'
00:06:02.994 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:02.994 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:02.994 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:02.994 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:02.994 12:13:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
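waitforlisten, traced above for both sockets, blocks until the target's RPC socket answers. A simplified stand-in, under the assumption that polling an RPC is sufficient (the real helper in test/common/autotest_common.sh also checks that the pid is still alive and honors max_retries=100):

  sock=/var/tmp/spdk2.sock
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done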
00:06:03.252 12:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:03.252 12:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:03.252 12:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:06:03.252 12:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:03.252 12:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:03.252 12:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:03.252 ************************************
00:06:03.252 END TEST locking_overlapped_coremask_via_rpc
00:06:03.252 ************************************
00:06:03.252
00:06:03.252 real 0m2.666s
00:06:03.252 user 0m1.380s
00:06:03.252 sys 0m0.232s
00:06:03.252 12:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:03.252 12:13:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:03.511 12:13:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:06:03.511 12:13:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63294 ]]
00:06:03.511 12:13:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63294
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63294 ']'
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63294
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63294
00:06:03.511 killing process with pid 63294
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63294'
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 63294
00:06:03.511 12:13:18 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 63294
00:06:03.769 12:13:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63324 ]]
00:06:03.769 12:13:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63324
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63324 ']'
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63324
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63324
00:06:03.769 killing process with pid 63324
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63324'
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 63324
00:06:03.769 12:13:18 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 63324
00:06:04.333 12:13:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:04.334 Process with pid 63294 is not found
00:06:04.334 Process with pid 63324 is not found
00:06:04.334 12:13:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:06:04.334 12:13:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63294 ]]
00:06:04.334 12:13:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63294
00:06:04.334 12:13:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63294 ']'
00:06:04.334 12:13:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63294
00:06:04.334 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (63294) - No such process
00:06:04.334 12:13:18 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 63294 is not found'
00:06:04.334 12:13:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63324 ]]
00:06:04.334 12:13:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63324
00:06:04.334 12:13:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63324 ']'
00:06:04.334 12:13:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63324
00:06:04.334 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (63324) - No such process
00:06:04.334 12:13:18 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 63324 is not found'
00:06:04.334 12:13:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:04.334 ************************************
00:06:04.334 END TEST cpu_locks
00:06:04.334 ************************************
00:06:04.334
00:06:04.334 real 0m21.195s
00:06:04.334 user 0m36.579s
00:06:04.334 sys 0m5.880s
00:06:04.334 12:13:18 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:04.334 12:13:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:04.334
00:06:04.334 real 0m47.720s
00:06:04.334 user 1m32.213s
00:06:04.334 sys 0m9.820s
00:06:04.334 ************************************
00:06:04.334 END TEST event
00:06:04.334 ************************************
00:06:04.334 12:13:19 event -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:04.334 12:13:19 event -- common/autotest_common.sh@10 -- # set +x
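Every suite in this log is wrapped by run_test, which produces the START/END banners and the real/user/sys timings seen above. A hedged sketch of that visible behavior, assuming plain bash (the actual wrapper in test/common/autotest_common.sh does more, for example the xtrace management also traced here):

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                       # emits the real/user/sys lines
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }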
00:06:04.334 12:13:19 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:06:04.334 12:13:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:04.334 12:13:19 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:04.334 12:13:19 -- common/autotest_common.sh@10 -- # set +x
00:06:04.334 ************************************
00:06:04.334 START TEST thread
00:06:04.334 ************************************
00:06:04.334 12:13:19 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:06:04.334 * Looking for test storage...
00:06:04.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:06:04.334 12:13:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:04.334 12:13:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:06:04.334 12:13:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:04.334 12:13:19 thread -- common/autotest_common.sh@10 -- # set +x
00:06:04.334 ************************************
00:06:04.334 START TEST thread_poller_perf
00:06:04.334 ************************************
00:06:04.334 12:13:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:04.334 [2024-07-25 12:13:19.154109] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization...
00:06:04.334 [2024-07-25 12:13:19.154545] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63475 ]
00:06:04.591 [2024-07-25 12:13:19.293156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.591 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:06:04.591 [2024-07-25 12:13:19.441016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.992 ======================================
00:06:05.992 busy:2207178682 (cyc)
00:06:05.992 total_run_count: 314000
00:06:05.992 tsc_hz: 2200000000 (cyc)
00:06:05.992 ======================================
00:06:05.992 poller_cost: 7029 (cyc), 3195 (nsec)
00:06:05.993 ************************************
00:06:05.993 END TEST thread_poller_perf
00:06:05.993 ************************************
00:06:05.993
00:06:05.993 real 0m1.399s
00:06:05.993 user 0m1.220s
00:06:05.993 sys 0m0.070s
00:06:05.993 12:13:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:05.993 12:13:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
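The poller_perf summary above ties out arithmetically: poller_cost is the busy cycle count divided by the run count, converted to nanoseconds via the reported TSC frequency. Checking the log's own numbers in bash:

  echo $(( 2207178682 / 314000 ))              # 7029 cycles per poller invocation
  echo $(( 7029 * 1000000000 / 2200000000 ))   # 3195 ns at the reported 2.2 GHz tsc_hz

The second run below repeats the benchmark with a 0-microsecond period; its cost (527 cyc, 239 nsec) works out the same way and shows untimed pollers dispatching roughly 13x cheaper than the 1-microsecond timed pollers.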
00:06:05.993 12:13:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:05.993 12:13:20 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:06:05.993 12:13:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:05.993 12:13:20 thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.993 ************************************
00:06:05.993 START TEST thread_poller_perf
00:06:05.993 ************************************
00:06:05.993 12:13:20 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:05.993 [2024-07-25 12:13:20.599873] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization...
00:06:05.993 [2024-07-25 12:13:20.599958] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63506 ]
00:06:05.993 [2024-07-25 12:13:20.730114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.993 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:06:05.993 [2024-07-25 12:13:20.849352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.365 ======================================
00:06:07.365 busy:2201794307 (cyc)
00:06:07.365 total_run_count: 4174000
00:06:07.365 tsc_hz: 2200000000 (cyc)
00:06:07.365 ======================================
00:06:07.365 poller_cost: 527 (cyc), 239 (nsec)
00:06:07.365 ************************************
00:06:07.365 END TEST thread_poller_perf
00:06:07.365 ************************************
00:06:07.365
00:06:07.365 real 0m1.353s
00:06:07.365 user 0m1.194s
00:06:07.365 sys 0m0.051s
00:06:07.365 12:13:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:07.365 12:13:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:07.365 12:13:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:07.365 ************************************
00:06:07.365 END TEST thread
00:06:07.365 ************************************
00:06:07.365
00:06:07.365 real 0m2.927s
00:06:07.365 user 0m2.482s
00:06:07.365 sys 0m0.221s
00:06:07.365 12:13:21 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:07.365 12:13:21 thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.365 12:13:22 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]]
00:06:07.365 12:13:22 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:06:07.365 12:13:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:07.365 12:13:22 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:07.365 12:13:22 -- common/autotest_common.sh@10 -- # set +x
00:06:07.365 ************************************
00:06:07.365 START TEST app_cmdline
00:06:07.365 ************************************
00:06:07.365 12:13:22 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:06:07.365 * Looking for test storage...
00:06:07.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:06:07.365 12:13:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:07.365 12:13:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63581
00:06:07.365 12:13:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63581
00:06:07.365 12:13:22 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:07.365 12:13:22 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 63581 ']'
00:06:07.365 12:13:22 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:07.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:07.365 12:13:22 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:07.365 12:13:22 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:07.365 12:13:22 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:07.365 12:13:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:07.365 [2024-07-25 12:13:22.170565] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization...
00:06:07.365 [2024-07-25 12:13:22.170680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63581 ]
00:06:07.622 [2024-07-25 12:13:22.306574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.622 [2024-07-25 12:13:22.429093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:06:08.579 {
00:06:08.579 "fields": {
00:06:08.579 "commit": "5038b8b39",
00:06:08.579 "major": 24,
00:06:08.579 "minor": 9,
00:06:08.579 "patch": 0,
00:06:08.579 "suffix": "-pre"
00:06:08.579 },
00:06:08.579 "version": "SPDK v24.09-pre git sha1 5038b8b39"
00:06:08.579 }
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@26 -- # sort
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:08.579 12:13:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:06:08.579 12:13:23 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:09.144 2024/07/25 12:13:23 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found
00:06:09.144 request:
00:06:09.144 {
00:06:09.144 "method": "env_dpdk_get_mem_stats",
00:06:09.144 "params": {}
00:06:09.144 }
00:06:09.144 Got JSON-RPC error response
00:06:09.144 GoRPCClient: error on JSON-RPC call
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:09.144 12:13:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63581
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 63581 ']'
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 63581
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63581
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:09.144 killing process with pid 63581
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63581'
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@969 -- # kill 63581
00:06:09.144 12:13:23 app_cmdline -- common/autotest_common.sh@974 -- # wait 63581
00:06:09.402
00:06:09.402 real 0m2.151s
00:06:09.402 user 0m2.719s
00:06:09.402 sys 0m0.470s
00:06:09.402 12:13:24 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:09.402 12:13:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:09.403 ************************************
00:06:09.403 END TEST app_cmdline
00:06:09.403 ************************************
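The -32601 Method-not-found above is the point of the test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and env_dpdk_get_mem_stats is rejected even though it is an ordinary SPDK RPC. The contrast, sketched against the same default socket:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed, returns the JSON above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected with Code=-32601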
00:06:09.403 12:13:24 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:06:09.403 12:13:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:09.403 12:13:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:09.403 12:13:24 -- common/autotest_common.sh@10 -- # set +x
00:06:09.403 ************************************
00:06:09.403 START TEST version
00:06:09.403 ************************************
00:06:09.403 12:13:24 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:06:09.660 * Looking for test storage...
00:06:09.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:06:09.660 12:13:24 version -- app/version.sh@17 -- # get_header_version major
00:06:09.660 12:13:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:09.660 12:13:24 version -- app/version.sh@14 -- # cut -f2
00:06:09.660 12:13:24 version -- app/version.sh@14 -- # tr -d '"'
00:06:09.661 12:13:24 version -- app/version.sh@17 -- # major=24
00:06:09.661 12:13:24 version -- app/version.sh@18 -- # get_header_version minor
00:06:09.661 12:13:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:09.661 12:13:24 version -- app/version.sh@14 -- # cut -f2
00:06:09.661 12:13:24 version -- app/version.sh@14 -- # tr -d '"'
00:06:09.661 12:13:24 version -- app/version.sh@18 -- # minor=9
00:06:09.661 12:13:24 version -- app/version.sh@19 -- # get_header_version patch
00:06:09.661 12:13:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:09.661 12:13:24 version -- app/version.sh@14 -- # cut -f2
00:06:09.661 12:13:24 version -- app/version.sh@14 -- # tr -d '"'
00:06:09.661 12:13:24 version -- app/version.sh@19 -- # patch=0
00:06:09.661 12:13:24 version -- app/version.sh@20 -- # get_header_version suffix
00:06:09.661 12:13:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:09.661 12:13:24 version -- app/version.sh@14 -- # cut -f2
00:06:09.661 12:13:24 version -- app/version.sh@14 -- # tr -d '"'
00:06:09.661 12:13:24 version -- app/version.sh@20 -- # suffix=-pre
00:06:09.661 12:13:24 version -- app/version.sh@22 -- # version=24.9
00:06:09.661 12:13:24 version -- app/version.sh@25 -- # (( patch != 0 ))
00:06:09.661 12:13:24 version -- app/version.sh@28 -- # version=24.9rc0
00:06:09.661 12:13:24 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:06:09.661 12:13:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:09.661 12:13:24 version -- app/version.sh@30 -- # py_version=24.9rc0
00:06:09.661 12:13:24 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:06:09.661
00:06:09.661 real 0m0.140s
00:06:09.661 user 0m0.087s
00:06:09.661 sys 0m0.083s
00:06:09.661 12:13:24 version -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:09.661 12:13:24 version -- common/autotest_common.sh@10 -- # set +x
00:06:09.661 ************************************
00:06:09.661 END TEST version
00:06:09.661 ************************************
00:06:09.661 12:13:24 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']'
00:06:09.661 12:13:24 -- spdk/autotest.sh@202 -- # uname -s
00:06:09.661 12:13:24 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]]
00:06:09.661 12:13:24 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]]
00:06:09.661 12:13:24 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]]
00:06:09.661 12:13:24 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']'
00:06:09.661 12:13:24 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']'
00:06:09.661 12:13:24 -- spdk/autotest.sh@264 -- # timing_exit lib
00:06:09.661 12:13:24 -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:09.661 12:13:24 -- common/autotest_common.sh@10 -- # set +x 00:06:09.661 12:13:24 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:09.661 12:13:24 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:09.661 12:13:24 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:09.661 12:13:24 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:09.661 12:13:24 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:09.661 12:13:24 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:09.661 12:13:24 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:09.661 12:13:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.661 12:13:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.661 12:13:24 -- common/autotest_common.sh@10 -- # set +x 00:06:09.661 ************************************ 00:06:09.661 START TEST nvmf_tcp 00:06:09.661 ************************************ 00:06:09.661 12:13:24 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:09.919 * Looking for test storage... 00:06:09.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:09.919 12:13:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:09.919 12:13:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:09.919 12:13:24 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:09.919 12:13:24 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.919 12:13:24 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.919 12:13:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.919 ************************************ 00:06:09.919 START TEST nvmf_target_core 00:06:09.919 ************************************ 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:09.919 * Looking for test storage... 00:06:09.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:06:09.919 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:09.920 ************************************ 00:06:09.920 START TEST nvmf_abort 00:06:09.920 ************************************ 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:09.920 * Looking for test storage... 
00:06:09.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
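nvmftestinit below tears down any stale state and builds a virtual topology for the NVMe/TCP target: a network namespace, veth pairs into it, and a bridge joining the host-side ends, with 10.0.0.1, 10.0.0.2 and 10.0.0.3 on the endpoints. Condensed from the commands that follow (the real nvmf_veth_init also handles the second target interface, loopback, and firewalling):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br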
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]]
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]]
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]]
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]]
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]]
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:06:09.920 Cannot find device "nvmf_init_br"
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # true
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:06:09.920 Cannot find device "nvmf_tgt_br"
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # true
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:06:09.920 Cannot find device "nvmf_tgt_br2"
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # true
00:06:09.920 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:06:10.178 Cannot find device "nvmf_init_br"
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # true
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:06:10.178 Cannot find device "nvmf_tgt_br"
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # true
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:06:10.178 Cannot find device "nvmf_tgt_br2"
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # true
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:06:10.178 Cannot find device "nvmf_br"
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # true
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:06:10.178 Cannot find device "nvmf_init_if"
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # true
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:06:10.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true
00:06:10.178 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:06:10.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:06:10.179 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:06:10.436 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:06:10.436 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:06:10.436 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:06:10.436 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:06:10.436 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:06:10.436 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:06:10.436 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:06:10.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:10.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms
00:06:10.437
00:06:10.437 --- 10.0.0.2 ping statistics ---
00:06:10.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:10.437 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:06:10.437 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:06:10.437 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms
00:06:10.437
00:06:10.437 --- 10.0.0.3 ping statistics ---
00:06:10.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:10.437 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
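The iptables rule above opens TCP port 4420, the IANA-assigned NVMe/TCP port, on the initiator interface, and the pings confirm the bridged path end to end. As an aside, once the subsystem is created later in this log, a kernel initiator could reach it with nvme-cli; a hedged example only, since the abort test itself drives the target with SPDK's userspace initiator instead:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0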
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:06:10.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:10.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:06:10.437
00:06:10.437 --- 10.0.0.1 ping statistics ---
00:06:10.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:10.437 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@433 -- # return 0
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=63959
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 63959
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 63959 ']'
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:10.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:10.437 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
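The target starting below is launched with -e 0xFFFF, enabling every tracepoint group, and its startup notices spell out how to inspect them. Based on that hint, a snapshot could be captured while the target runs; the binary path is an assumption, matching the build layout used elsewhere in this log:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or post-mortem, per the notice below: copy /dev/shm/nvmf_trace.0 for offline analysis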
00:06:10.437 [2024-07-25 12:13:25.202144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.695 [2024-07-25 12:13:25.338240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.695 [2024-07-25 12:13:25.456971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:10.695 [2024-07-25 12:13:25.457026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:10.695 [2024-07-25 12:13:25.457039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:10.695 [2024-07-25 12:13:25.457048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:10.695 [2024-07-25 12:13:25.457055] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:10.695 [2024-07-25 12:13:25.457187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.695 [2024-07-25 12:13:25.457816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.695 [2024-07-25 12:13:25.457849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.628 [2024-07-25 12:13:26.243241] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.628 Malloc0 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.628 
Delay0 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.628 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.629 [2024-07-25 12:13:26.318786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.629 12:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:11.886 [2024-07-25 12:13:26.499120] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:13.789 Initializing NVMe Controllers 00:06:13.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:13.789 Controller IO queue size 128, less than required. 00:06:13.789 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:13.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:13.789 Initialization complete. Launching workers.
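Condensed from the xtrace above, the abort case boils down to the following RPC sequence (a sketch, not the harness script itself; it assumes a running nvmf_tgt with rpc.py on the default /var/tmp/spdk.sock, and reuses the exact arguments visible in the trace):

  # Abort-test setup, condensed from the commands traced above (sketch).
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport with the harness's options
  rpc.py bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB ramdisk bdev, 4096-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                               # ~1 s read/write latencies so I/O stays queued
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Drive queued I/O and abort it from the host side:
  /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 wrapper is what makes the abort statistics below meaningful: with second-long latencies, most of the 128-deep queue is still outstanding when the host issues aborts, so nearly every submitted abort finds a command to cancel.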
00:06:13.789 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 31312 00:06:13.789 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31377, failed to submit 62 00:06:13.789 success 31316, unsuccessful 61, failed 0 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:13.789 rmmod nvme_tcp 00:06:13.789 rmmod nvme_fabrics 00:06:13.789 rmmod nvme_keyring 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 63959 ']' 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 63959 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 63959 ']' 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 63959 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63959 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:13.789 killing process with pid 63959 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63959' 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 63959 00:06:13.789 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 63959 00:06:14.046 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:14.046 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort --
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:14.046 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:14.046 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:14.046 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:14.046 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.046 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.046 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.305 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:14.305 00:06:14.305 real 0m4.289s 00:06:14.305 user 0m12.184s 00:06:14.305 sys 0m1.069s 00:06:14.305 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.305 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.305 ************************************ 00:06:14.305 END TEST nvmf_abort 00:06:14.305 ************************************ 00:06:14.305 12:13:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:14.305 12:13:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:14.305 12:13:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.305 12:13:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:14.305 ************************************ 00:06:14.305 START TEST nvmf_ns_hotplug_stress 00:06:14.305 ************************************ 00:06:14.305 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:14.305 * Looking for test storage... 
00:06:14.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.305 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:14.306 Cannot find device "nvmf_tgt_br" 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:14.306 Cannot find device "nvmf_tgt_br2" 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:14.306 Cannot find device "nvmf_tgt_br" 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:14.306 Cannot find device "nvmf_tgt_br2" 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:06:14.306 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:14.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:14.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:14.566 12:13:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:14.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:06:14.566 00:06:14.566 --- 10.0.0.2 ping statistics --- 00:06:14.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.566 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:14.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:14.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:06:14.566 00:06:14.566 --- 10.0.0.3 ping statistics --- 00:06:14.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.566 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:06:14.566 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:14.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:14.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:06:14.824 00:06:14.824 --- 10.0.0.1 ping statistics --- 00:06:14.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.824 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=64226 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 64226 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 64226 ']' 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.824 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.824 [2024-07-25 12:13:29.510010] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
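The nvmf_veth_init sequence traced above builds a small two-namespace test topology; stripped of timestamps and xtrace markers, it is roughly the following (a sketch using the interface names and addresses from the trace; nvmf_tgt_if2 and 10.0.0.3 follow the same pattern):

  ip netns add nvmf_tgt_ns_spdk                              # the target runs in its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge joins the host-side veth halves
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                         # reachability check, as in the ping output above

The sub-0.1 ms round trips in the ping output confirm the bridge path is up before nvmf_tgt is launched inside the namespace with ip netns exec.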
00:06:14.824 [2024-07-25 12:13:29.510123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.824 [2024-07-25 12:13:29.644270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.081 [2024-07-25 12:13:29.763947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.081 [2024-07-25 12:13:29.764199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.081 [2024-07-25 12:13:29.764400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.081 [2024-07-25 12:13:29.764604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.081 [2024-07-25 12:13:29.764641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.081 [2024-07-25 12:13:29.764903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.082 [2024-07-25 12:13:29.765428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.082 [2024-07-25 12:13:29.765438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.015 12:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.015 12:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:16.015 12:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:16.015 12:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:16.015 12:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.015 12:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.015 12:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:16.015 12:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:16.015 [2024-07-25 12:13:30.825493] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.015 12:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:16.273 12:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.530 [2024-07-25 12:13:31.348422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.530 12:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:17.095 12:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:17.095 Malloc0 00:06:17.095 12:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:17.660 Delay0 00:06:17.660 12:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.660 12:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:18.226 NULL1 00:06:18.226 12:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:18.483 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:18.483 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=64368 00:06:18.483 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:18.483 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.853 Read completed with error (sct=0, sc=11) 00:06:19.853 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.110 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:20.110 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:20.366 true 00:06:20.366 12:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:20.366 12:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.931 12:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.188 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:21.188 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:21.446 true 00:06:21.446 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:21.446 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.704 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.962 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:21.962 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:22.527 true 00:06:22.527 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:22.527 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.527 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.093 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:23.093 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:23.093 true 00:06:23.351 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:23.351 12:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.918 12:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.484 12:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:24.484 12:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:24.484 true 00:06:24.484 12:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:24.484 12:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.741 12:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.999 12:13:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:24.999 12:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:25.257 true 00:06:25.257 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:25.257 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.191 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.191 12:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:26.191 12:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:26.449 true 00:06:26.449 12:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:26.449 12:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.707 12:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.964 12:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:26.964 12:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:27.223 true 00:06:27.223 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:27.223 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.158 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.415 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:28.415 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:28.415 true 00:06:28.415 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:28.415 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.674 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.239 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1010 00:06:29.239 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:29.239 true 00:06:29.239 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:29.239 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.171 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.430 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:30.430 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:30.687 true 00:06:30.687 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:30.687 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.990 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.248 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:31.248 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:31.506 true 00:06:31.506 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:31.506 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.768 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.026 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:32.026 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:32.284 true 00:06:32.285 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:32.285 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.219 12:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.477 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:33.477 12:13:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:33.770 true 00:06:33.770 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:33.770 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.029 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.029 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:34.029 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:34.287 true 00:06:34.287 12:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:34.287 12:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.545 12:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.803 12:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:34.803 12:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:35.061 true 00:06:35.061 12:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:35.061 12:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.994 12:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.579 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:36.579 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:36.837 true 00:06:36.837 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:36.837 12:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.227 12:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.227 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:06:38.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.485 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:38.485 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:38.744 true 00:06:38.744 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:38.744 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.676 12:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.676 12:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:39.676 12:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:39.935 true 00:06:39.935 12:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:39.935 12:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.192 12:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.450 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:40.450 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:40.708 true 00:06:40.708 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:40.708 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.641 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.641 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:41.641 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:41.901 true 00:06:41.901 
12:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:41.901 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.160 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.440 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:42.440 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:42.698 true 00:06:42.698 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:42.698 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.631 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.889 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:43.889 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:43.889 true 00:06:44.147 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:44.147 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.404 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.404 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:44.404 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:44.662 true 00:06:44.662 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:44.662 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.921 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.179 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:45.179 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:45.438 true 00:06:45.438 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:45.438 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.813 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.813 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:46.813 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:47.071 true 00:06:47.071 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:47.071 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.007 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.007 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:48.007 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:48.265 true 00:06:48.265 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368 00:06:48.265 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.830 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.830 Initializing NVMe Controllers 00:06:48.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:48.830 Controller IO queue size 128, less than required. 00:06:48.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:48.831 Controller IO queue size 128, less than required. 00:06:48.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
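The @44-@50 records above are the hot-plug loop of ns_hotplug_stress.sh: while the I/O generator (pid 64368) is still alive, namespace 1 is detached, re-attached backed by the Delay0 bdev, and the NULL1 null bdev is grown by one block and resized, over and over. A minimal sketch of that loop, reconstructed from the trace alone (the loop shape and every name not visible in the trace are assumptions, not the real script):

    #!/usr/bin/env bash
    # Sketch reconstructed from the @44-@50 xtrace records above; illustrative only.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=64368    # assumed name; 64368 is the I/O generator's pid in this run
    null_size=1017

    while kill -0 "$perf_pid"; do                   # @44: loop until the generator exits
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove namespace 1 under load
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0  # @46: re-attach it backed by Delay0
        null_size=$((null_size + 1))                # @49: 1018, 1019, ... as traced above
        "$rpc" bdev_null_resize NULL1 "$null_size"  # @50: resize NULL1 while I/O runs
    done

Each successful bdev_null_resize prints true, which is the bare true between iterations; the suppressed "Read completed with error (sct=0, sc=11)" messages interleaved above are, presumably, reads caught in those detach windows.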
00:06:48.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:48.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:48.831 Initialization complete. Launching workers.
00:06:48.831 ========================================================
00:06:48.831                                                                    Latency(us)
00:06:48.831 Device Information                                               :       IOPS      MiB/s    Average        min        max
00:06:48.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     978.13       0.48   69057.89    3309.13 1039173.91
00:06:48.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11030.10       5.39   11604.46    3114.70  640327.24
00:06:48.831 ========================================================
00:06:48.831 Total                                                            :   12008.23       5.86   16284.33    3114.70 1039173.91
00:06:48.831
00:06:48.831 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:48.831 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:49.088 true
00:06:49.088 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64368
00:06:49.088 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (64368) - No such process
00:06:49.088 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 64368
00:06:49.088 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.345 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:49.603 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:49.603 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:49.603 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:49.603 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:49.603 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:49.861 null0
00:06:49.861 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:49.861 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:49.861 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:50.119 null1
00:06:50.119 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:50.119 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:50.119 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:50.378 null2
00:06:50.378 12:14:05
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.378 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.378 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:50.635 null3 00:06:50.635 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.635 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.635 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:50.893 null4 00:06:50.893 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.893 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.893 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:51.151 null5 00:06:51.151 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.151 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:51.151 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:51.408 null6 00:06:51.408 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.408 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:51.408 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:51.666 null7 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
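An aside on the Latency(us) summary a few lines up: its Total row is just the two namespace rows combined, with the average latency weighted by each namespace's IOPS. A standalone check of that arithmetic (the numbers are copied from the table; the snippet is illustrative, not part of the test):

    awk 'BEGIN {
        i1 =   978.13; l1 = 69057.89    # NSID 1: IOPS, average latency (us)
        i2 = 11030.10; l2 = 11604.46    # NSID 2: IOPS, average latency (us)
        printf "Total: %.2f IOPS, %.2f us average\n", i1 + i2, (i1*l1 + i2*l2) / (i1 + i2)
    }'

This prints Total: 12008.23 IOPS, 16284.33 us average, matching the Total row: the much slower NSID 1 queue (about 69 ms average, with a max above one second) drags the combined average well past NSID 2's roughly 11.6 ms.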
00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
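The @58-@60 iterations a little above set up the fixture for the churn phase: eight null bdevs of 100 MB with a 4096-byte block size, null0 through null7, one per worker. A minimal reconstruction of that loop (the for-loop shape is inferred; bash xtrace renders exactly this form as the separate (( i = 0 )), (( i < nthreads )) and (( ++i )) records seen in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nthreads=8
    pids=()

    for ((i = 0; i < nthreads; ++i)); do
        # bdev_null_create <name> <size in MB> <block size>; on success it prints
        # the new bdev's name, which is the bare null0 ... null7 seen above.
        "$rpc" bdev_null_create "null$i" 100 4096
    done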
00:06:51.666 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
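Each backgrounded worker is the add_remove function traced at @14-@18: attach its namespace to cnode1, detach it, ten times in a row, all eight racing against each other. Their pids are recorded at @64 and reaped by the eight-pid wait at @66 just below. A sketch of the whole fan-out, reconstructed from the trace and reusing the rpc, nqn, nthreads and pids names assumed in the sketches above:

    add_remove() {    # @14-@18: one attach/detach churner per namespace
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                               # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    for ((i = 0; i < nthreads; ++i)); do
        add_remove "$((i + 1))" "null$i" &   # @63: add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)                           # @64: remember each worker's pid
    done
    wait "${pids[@]}"                        # @66: block until all eight workers finish

The heavily interleaved @16-@18 records that follow are those eight workers running concurrently; the -n nsid argument tells each add apart in the trace.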
00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.667 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 65388 65389 65392 65393 65394 65396 65399 65400 00:06:51.925 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.925 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.925 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.925 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.925 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.925 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.183 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.183 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.183 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.183 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.183 12:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.183 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.183 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.183 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.184 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.184 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.184 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.184 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:06:52.184 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.184 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.442 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.701 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.959 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.959 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.959 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.959 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.959 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.959 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.959 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.959 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.960 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.960 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.960 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.960 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.960 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.960 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.960 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.960 12:14:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.255 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.255 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.255 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.551 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.810 12:14:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.810 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.069 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.328 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.328 12:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.328 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.328 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.328 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.328 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.328 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.328 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.328 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.328 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.328 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.587 12:14:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.587 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.104 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.363 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.363 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.363 12:14:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.363 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.363 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.363 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.363 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.363 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.363 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.622 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.881 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.141 12:14:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.141 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.400 12:14:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.400 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.659 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.918 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
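The interleaved @17/@18 calls in this trace (continuing above and below) are the point of the test: one worker keeps hot-adding namespaces (null bdevs null0-null7 mapped to NSIDs 1-8) to nqn.2016-06.io.spdk:cnode1 while another keeps hot-removing them, ten rounds each. A minimal sketch of that pattern, reconstructed from the xtrace; the parallel-worker layout and the RANDOM-based ID choice are assumptions, not the upstream ns_hotplug_stress.sh verbatim:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do                                # @16: loop bound seen in the trace
    n=$(( RANDOM % 8 + 1 ))                                       # NSID 1..8, backed by null0..null7
    $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))"  # @17: hot-add a namespace
  done &
  for (( i = 0; i < 10; ++i )); do
    $rpc nvmf_subsystem_remove_ns "$nqn" $(( RANDOM % 8 + 1 ))    # @18: concurrent hot-remove
  done &
  wait  # adds and removes race on purpose, which is why they interleave in the log
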
00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.919 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.178 12:14:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.178 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.436 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.436 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.436 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.436 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.436 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.436 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.436 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.436 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:57.696 rmmod nvme_tcp 00:06:57.696 rmmod nvme_fabrics 00:06:57.696 rmmod nvme_keyring 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set 
-e 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 64226 ']' 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 64226 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 64226 ']' 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 64226 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64226 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64226' 00:06:57.696 killing process with pid 64226 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 64226 00:06:57.696 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 64226 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:57.955 ************************************ 00:06:57.955 END TEST nvmf_ns_hotplug_stress 00:06:57.955 ************************************ 00:06:57.955 00:06:57.955 real 0m43.823s 00:06:57.955 user 3m31.806s 00:06:57.955 sys 0m13.165s 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.955 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:58.214 ************************************ 00:06:58.214 START TEST nvmf_delete_subsystem 00:06:58.214 ************************************ 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:58.214 * Looking for test storage... 00:06:58.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.214 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:58.215 12:14:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:58.215 Cannot find device "nvmf_tgt_br" 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:58.215 Cannot find device "nvmf_tgt_br2" 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:58.215 Cannot find device "nvmf_tgt_br" 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:06:58.215 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:58.215 Cannot find device "nvmf_tgt_br2" 00:06:58.215 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:06:58.215 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:58.215 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:58.215 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:58.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:58.215 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:06:58.215 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:58.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:58.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:06:58.474 00:06:58.474 --- 10.0.0.2 ping statistics --- 00:06:58.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.474 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:58.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:58.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:06:58.474 00:06:58.474 --- 10.0.0.3 ping statistics --- 00:06:58.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.474 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:06:58.474 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:58.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:58.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:06:58.474 00:06:58.474 --- 10.0.0.1 ping statistics --- 00:06:58.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.474 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=66723 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 66723 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 66723 ']' 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.475 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.732 [2024-07-25 12:14:13.343486] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
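The three successful pings above verify the veth topology that nvmf_veth_init assembled a few entries earlier: the initiator interface (nvmf_init_if, 10.0.0.1) on the host reaches the target interfaces (nvmf_tgt_if 10.0.0.2, nvmf_tgt_if2 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. Stripped of harness bookkeeping, the plumbing reduces to roughly this (condensed from the traced ip commands; the "up" toggles and the iptables ACCEPT rules are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                  # target ends move into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                  # bridge joins the three host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
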
00:06:58.732 [2024-07-25 12:14:13.343587] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.732 [2024-07-25 12:14:13.485180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.989 [2024-07-25 12:14:13.619385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.989 [2024-07-25 12:14:13.619442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.989 [2024-07-25 12:14:13.619456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.989 [2024-07-25 12:14:13.619467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.989 [2024-07-25 12:14:13.619476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:58.989 [2024-07-25 12:14:13.619649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.989 [2024-07-25 12:14:13.619836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.555 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.555 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:59.555 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:59.555 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.555 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.555 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.555 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.555 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.555 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.555 [2024-07-25 12:14:14.416344] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.813 [2024-07-25 12:14:14.432456] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.813 NULL1 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.813 Delay0 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=66774 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:59.813 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:59.813 [2024-07-25 12:14:14.637407] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
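With the listener up, the test arranges for I/O to be guaranteed in flight when the subsystem disappears: the namespace is a delay bdev adding roughly a second of latency on top of a null bdev, spdk_nvme_perf is started against it, and nvmf_delete_subsystem is issued two seconds in. The burst of "completed with error (sct=0, sc=8)" lines that follows is therefore expected; generic status 0x8 is "command aborted due to SQ deletion". Condensed from the rpc_cmd trace above, using rpc.py directly in place of the harness wrapper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_transport -t tcp -o -u 8192                        # @15
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10    # @16
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # @17
  $rpc bdev_null_create NULL1 1000 512                                # @18
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                       # @23: latencies in microseconds
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0                            # @24
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                         # @26: 5 s of queued random I/O
  perf_pid=$!                                                         # @28: pid 66774 in this run
  sleep 2                                                             # @30
  $rpc nvmf_delete_subsystem "$nqn"                                   # @32: yank the subsystem mid-I/O
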
00:07:01.714 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:01.714 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.714 12:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 
00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 starting I/O failed: -6 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 [2024-07-25 12:14:16.675894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff0ec00d330 is same with the state(5) to be set 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error 
(sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Read completed with error (sct=0, sc=8) 00:07:01.973 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 [2024-07-25 12:14:16.676361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f910 is same with the state(5) to be set 00:07:01.974 starting I/O failed: -6 00:07:01.974 starting I/O failed: -6 00:07:01.974 starting I/O failed: -6 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Write completed with error (sct=0, sc=8) 00:07:01.974 Read completed with error (sct=0, sc=8) 
00:07:01.974 Read/Write completed with error (sct=0, sc=8) [22 identical completion records condensed: 14 reads, 8 writes]
00:07:02.912 [2024-07-25 12:14:17.655891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a70510 is same with the state(5) to be set
00:07:02.912 Read/Write completed with error (sct=0, sc=8) [15 identical completion records condensed: 14 reads, 1 write]
00:07:02.912 [2024-07-25 12:14:17.673411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff0ec00d660 is same with the state(5) to be set
00:07:02.912 Read/Write completed with error (sct=0, sc=8) [27 identical completion records condensed: 21 reads, 6 writes]
00:07:02.912 [2024-07-25 12:14:17.674264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93a80 is same with the state(5) to be set
00:07:02.912 Read/Write completed with error (sct=0, sc=8) [33 identical completion records condensed: 27 reads, 6 writes]
00:07:02.912 [2024-07-25 12:14:17.675137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff0ec00d000 is same with the state(5) to be set
00:07:02.912 Read/Write completed with error (sct=0, sc=8) [23 identical completion records condensed: 18 reads, 5 writes]
00:07:02.912 Initializing NVMe Controllers
00:07:02.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:02.912 Controller IO queue size 128, less than required.
00:07:02.912 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:02.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:02.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:02.912 Initialization complete. Launching workers.
00:07:02.912 ========================================================
00:07:02.912                                                                            Latency(us)
00:07:02.912 Device Information                                                       :   IOPS   MiB/s    Average        min        max
00:07:02.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.15    0.09  883953.15     487.16 1012963.43
00:07:02.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.32    0.07  997003.48     283.99 2003309.13
00:07:02.912 ========================================================
00:07:02.912 Total                                                                    : 327.47    0.16  936538.68     283.99 2003309.13
00:07:02.912
00:07:02.912 Read/Write completed with error (sct=0, sc=8) [4 identical completion records condensed: 2 reads, 2 writes]
00:07:02.912 [2024-07-25 12:14:17.675627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a926c0 is same with the state(5) to be set
00:07:02.912 [2024-07-25 12:14:17.676448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a70510 (9): Bad file descriptor
00:07:02.912 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:02.912 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:02.912 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:02.912 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66774
00:07:02.912 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66774
00:07:03.480 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (66774) - No such process
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 66774
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 66774
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 66774
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:03.480 [2024-07-25 12:14:18.203374] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=66825
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66825
00:07:03.480 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:03.739 [2024-07-25 12:14:18.381327] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
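The xtrace records above and below show delete_subsystem.sh probing the spdk_nvme_perf process twice per second with kill -0 until it exits. A minimal sketch of that polling pattern, assuming the loop structure matches the traced commands (variable names and the timeout message are illustrative; the script's full source is not part of this log):

perf_pid=66825            # pid of the spdk_nvme_perf instance launched above
delay=0
# kill -0 sends no signal; it only reports whether the process still exists
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf did not exit in time" >&2; break; }
    sleep 0.5
done

Once kill -0 starts failing ("No such process"), the script reaps the pid with wait; under the NOT helper traced above, a failing wait is the expected, passing outcome.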
00:07:03.997 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:03.997 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66825
00:07:03.997 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:04.564 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:04.564 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66825
00:07:04.564 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:05.131 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:05.131 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66825
00:07:05.131 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:05.390 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:05.390 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66825
00:07:05.390 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:05.957 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:05.957 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66825
00:07:05.957 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:06.522 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:06.522 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66825
00:07:06.522 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:06.780 Initializing NVMe Controllers
00:07:06.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:06.780 Controller IO queue size 128, less than required.
00:07:06.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:06.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:06.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:06.780 Initialization complete. Launching workers.
00:07:06.780 ========================================================
00:07:06.780                                                                            Latency(us)
00:07:06.780 Device Information                                                       :   IOPS   MiB/s    Average         min         max
00:07:06.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00    0.06 1003500.55  1000170.90  1042270.77
00:07:06.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00    0.06 1005433.41  1000189.45  1013868.89
00:07:06.780 ========================================================
00:07:06.780 Total                                                                    : 256.00    0.12 1004466.98  1000170.90  1042270.77
00:07:06.780
00:07:07.038 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:07.038 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66825
00:07:07.038 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (66825) - No such process
00:07:07.038 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 66825
00:07:07.039 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:07.039 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:07.039 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:07.039 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:07.972 rmmod nvme_tcp
00:07:07.972 rmmod nvme_fabrics
00:07:07.972 rmmod nvme_keyring
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 66723 ']'
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 66723
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 66723 ']'
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 66723
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66723
00:07:07.972 killing process with pid 66723
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66723'
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 66723
00:07:07.972 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 66723
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:07:08.232
00:07:08.232 real 0m10.079s
00:07:08.232 user 0m30.330s
00:07:08.232 sys 0m1.521s
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:08.232 ************************************
00:07:08.232 END TEST nvmf_delete_subsystem
00:07:08.232 ************************************
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:08.232 ************************************
00:07:08.232 START TEST nvmf_host_management
00:07:08.232 ************************************
00:07:08.232 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:08.232 * Looking for test storage...
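The START TEST / END TEST banners and the real/user/sys summary above come from the run_test wrapper in common/autotest_common.sh. A behavioral sketch reconstructed from this output alone, not the helper's verbatim source (the real wrapper also performs argument checks such as the '[' 3 -le 1 ']' test traced above):

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                     # emits the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}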
00:07:08.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:08.232 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:08.233 Cannot find device "nvmf_tgt_br" 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:08.233 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:08.491 Cannot find device "nvmf_tgt_br2" 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:08.491 Cannot find device "nvmf_tgt_br" 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:08.491 Cannot find device "nvmf_tgt_br2" 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:08.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:08.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:08.491 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:08.492 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:08.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:07:08.762 00:07:08.762 --- 10.0.0.2 ping statistics --- 00:07:08.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.762 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:08.762 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:08.762 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:08.762 00:07:08.762 --- 10.0.0.3 ping statistics --- 00:07:08.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.762 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:08.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:08.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:07:08.762 00:07:08.762 --- 10.0.0.1 ping statistics --- 00:07:08.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.762 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=67075 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 67075 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 67075 ']' 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.762 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.762 [2024-07-25 12:14:23.469618] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
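For reference, the nvmf_veth_init sequence traced above, collected into one block. Every command is taken verbatim from the trace; the helper's teardown of stale devices and its tolerance of "Cannot find device" errors are omitted:

# build an isolated initiator/target topology: one veth pair per interface,
# target ends moved into a network namespace, everything joined by a bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# open the NVMe/TCP port and allow bridged traffic
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# connectivity checks, as run above
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1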
00:07:08.762 [2024-07-25 12:14:23.469736] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.762 [2024-07-25 12:14:23.607081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.048 [2024-07-25 12:14:23.732561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.048 [2024-07-25 12:14:23.732624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.048 [2024-07-25 12:14:23.732638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.048 [2024-07-25 12:14:23.732647] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.048 [2024-07-25 12:14:23.732655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.048 [2024-07-25 12:14:23.733254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.048 [2024-07-25 12:14:23.733393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.048 [2024-07-25 12:14:23.733708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:09.048 [2024-07-25 12:14:23.733755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.615 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.615 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:09.615 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.615 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.615 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 [2024-07-25 12:14:24.517538] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:09.874 12:14:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 Malloc0 00:07:09.874 [2024-07-25 12:14:24.590655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=67148 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67148 /var/tmp/bdevperf.sock 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 67148 ']' 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:09.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:09.874 { 00:07:09.874 "params": { 00:07:09.874 "name": "Nvme$subsystem", 00:07:09.874 "trtype": "$TEST_TRANSPORT", 00:07:09.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:09.874 "adrfam": "ipv4", 00:07:09.874 "trsvcid": "$NVMF_PORT", 00:07:09.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:09.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:09.874 "hdgst": ${hdgst:-false}, 00:07:09.874 "ddgst": ${ddgst:-false} 00:07:09.874 }, 00:07:09.874 "method": "bdev_nvme_attach_controller" 00:07:09.874 } 00:07:09.874 EOF 00:07:09.874 )") 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:09.874 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:09.874 "params": { 00:07:09.874 "name": "Nvme0", 00:07:09.874 "trtype": "tcp", 00:07:09.874 "traddr": "10.0.0.2", 00:07:09.874 "adrfam": "ipv4", 00:07:09.874 "trsvcid": "4420", 00:07:09.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:09.874 "hdgst": false, 00:07:09.874 "ddgst": false 00:07:09.874 }, 00:07:09.874 "method": "bdev_nvme_attach_controller" 00:07:09.874 }' 00:07:09.874 [2024-07-25 12:14:24.710932] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:07:09.874 [2024-07-25 12:14:24.711068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67148 ] 00:07:10.132 [2024-07-25 12:14:24.853431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.132 [2024-07-25 12:14:24.986171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.390 Running I/O for 10 seconds... 
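The config entry printed above is produced by gen_nvmf_target_json, which renders one bdev_nvme_attach_controller entry per subsystem from a heredoc template and hands the joined result to bdevperf over an anonymous pipe (/dev/fd/63). A sketch of that templating pattern, reassembled from the xtrace fragments above; any outer JSON wrapper assembled around the entries is not visible in this excerpt:

config=()
for subsystem in "${@:-1}"; do
    # shell expansion fills the per-subsystem fields; hdgst/ddgst default
    # to false, matching the rendered entry printed above
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
(IFS=,; printf '%s\n' "${config[*]}") | jq .   # join and pretty-print, mirroring the IFS/jq steps traced above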
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']'
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:10.959 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:10.960 [2024-07-25 12:14:25.803776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1438310 is same with the state(5) to be set
00:07:10.960 [identical *ERROR* record repeated for microsecond timestamps 12:14:25.803835 through 12:14:25.804414; dozens of duplicate lines condensed]
00:07:10.960 [2024-07-25 12:14:25.804521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:10.960 [2024-07-25 12:14:25.804550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.960 [2024-07-25 12:14:25.804574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.960 [2024-07-25 12:14:25.804585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.960 [2024-07-25 12:14:25.804598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.960 [2024-07-25 12:14:25.804608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.960 [2024-07-25 12:14:25.804619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.960 [2024-07-25 12:14:25.804629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.960 [2024-07-25 12:14:25.804640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.960 [2024-07-25 12:14:25.804649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.960 [2024-07-25 12:14:25.804661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.960 [2024-07-25 12:14:25.804671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.960 [2024-07-25 12:14:25.804683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.960 [2024-07-25 12:14:25.804700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.960 [2024-07-25 12:14:25.804712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.804975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.804992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:10.961 [2024-07-25 12:14:25.805088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 
[2024-07-25 12:14:25.805320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 
12:14:25.805533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.961 [2024-07-25 12:14:25.805639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.961 [2024-07-25 12:14:25.805648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 
12:14:25.805741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 
12:14:25.805955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.805986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.805997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.962 [2024-07-25 12:14:25.806007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.806017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c49820 is same with the state(5) to be set 00:07:10.962 [2024-07-25 12:14:25.806094] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c49820 was disconnected and freed. reset controller. 00:07:10.962 [2024-07-25 12:14:25.806181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.962 [2024-07-25 12:14:25.806208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.806221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.962 [2024-07-25 12:14:25.806231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.806241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.962 [2024-07-25 12:14:25.806250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.806260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.962 [2024-07-25 12:14:25.806269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.806278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c49af0 is same with the state(5) to be set 00:07:10.962 [2024-07-25 12:14:25.807443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:10.962 task offset: 114688 on job bdev=Nvme0n1 fails 00:07:10.962 00:07:10.962 Latency(us) 00:07:10.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.962 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:10.962 Job: Nvme0n1 ended in about 0.63 seconds with error 00:07:10.962 Verification LBA range: start 0x0 length 0x400 00:07:10.962 Nvme0n1 : 0.63 1416.44 88.53 101.17 0.00 41064.22 6672.76 37415.10 
00:07:10.962 =================================================================================================================== 00:07:10.962 Total : 1416.44 88.53 101.17 0.00 41064.22 6672.76 37415.10 00:07:10.962 [2024-07-25 12:14:25.809848] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.962 [2024-07-25 12:14:25.809877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c49af0 (9): Bad file descriptor 00:07:10.962 [2024-07-25 12:14:25.811174] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:10.962 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.962 [2024-07-25 12:14:25.811278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:10.962 [2024-07-25 12:14:25.811314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.962 [2024-07-25 12:14:25.811336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:10.962 [2024-07-25 12:14:25.811347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:10.962 [2024-07-25 12:14:25.811356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:10.962 [2024-07-25 12:14:25.811365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c49af0 00:07:10.962 [2024-07-25 12:14:25.811402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c49af0 (9): Bad file descriptor 00:07:10.962 [2024-07-25 12:14:25.811420] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:10.962 [2024-07-25 12:14:25.811429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:10.962 [2024-07-25 12:14:25.811440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:10.962 [2024-07-25 12:14:25.811456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
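
The CONNECT failure above is the expected first half of this test case: bdevperf dials the target with host NQN nqn.2016-06.io.spdk:host0, which is not yet on the allow list of subsystem nqn.2016-06.io.spdk:cnode0, so nvmf_qpair_access_allowed rejects it and the fabric CONNECT completes with sct 1, sc 132 (a command-specific status that reads as an unauthorized host). The script's next step, shown below, authorizes that host over RPC. A minimal stand-alone sketch of the same step; the rpc.py path matches this job's repo layout and both NQNs are taken from the log:

    # Sketch: put the rejected host NQN on the subsystem's allow list so that
    # a follow-up bdevperf run with hostnqn=...:host0 can connect.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host0
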
00:07:10.962 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:10.962 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.962 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.962 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.962 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:12.338 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67148 00:07:12.338 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67148) - No such process 00:07:12.338 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:12.339 { 00:07:12.339 "params": { 00:07:12.339 "name": "Nvme$subsystem", 00:07:12.339 "trtype": "$TEST_TRANSPORT", 00:07:12.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:12.339 "adrfam": "ipv4", 00:07:12.339 "trsvcid": "$NVMF_PORT", 00:07:12.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:12.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:12.339 "hdgst": ${hdgst:-false}, 00:07:12.339 "ddgst": ${ddgst:-false} 00:07:12.339 }, 00:07:12.339 "method": "bdev_nvme_attach_controller" 00:07:12.339 } 00:07:12.339 EOF 00:07:12.339 )") 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
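
The heredoc above is the per-controller fragment gen_nvmf_target_json emits; the fully substituted JSON is printed just below and handed to bdevperf on file descriptor 62. For reference, a stand-alone sketch of the same run with the config in a regular file. The file name and the outer "subsystems" wrapper are illustrative assumptions about the final config layout; the parameter values and bdevperf flags are the ones shown in the log:

    # Sketch: equivalent bdevperf invocation reading the bdev config from a
    # file instead of a process-substitution fd (/tmp/nvme0.json is a
    # hypothetical name).
    cat > /tmp/nvme0.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1
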
00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:12.339 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:12.339 "params": { 00:07:12.339 "name": "Nvme0", 00:07:12.339 "trtype": "tcp", 00:07:12.339 "traddr": "10.0.0.2", 00:07:12.339 "adrfam": "ipv4", 00:07:12.339 "trsvcid": "4420", 00:07:12.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:12.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:12.339 "hdgst": false, 00:07:12.339 "ddgst": false 00:07:12.339 }, 00:07:12.339 "method": "bdev_nvme_attach_controller" 00:07:12.339 }' 00:07:12.339 [2024-07-25 12:14:26.896439] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:07:12.339 [2024-07-25 12:14:26.896578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67198 ] 00:07:12.339 [2024-07-25 12:14:27.035724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.339 [2024-07-25 12:14:27.198725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.597 Running I/O for 1 seconds... 00:07:13.966 00:07:13.966 Latency(us) 00:07:13.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.966 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:13.966 Verification LBA range: start 0x0 length 0x400 00:07:13.966 Nvme0n1 : 1.02 1503.36 93.96 0.00 0.00 41730.76 5719.51 37415.10 00:07:13.966 =================================================================================================================== 00:07:13.966 Total : 1503.36 93.96 0.00 0.00 41730.76 5719.51 37415.10 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:13.966 rmmod nvme_tcp 00:07:13.966 rmmod nvme_fabrics 00:07:13.966 rmmod nvme_keyring 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@124 -- # set -e 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 67075 ']' 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 67075 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 67075 ']' 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 67075 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67075 00:07:13.966 killing process with pid 67075 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67075' 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 67075 00:07:13.966 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 67075 00:07:14.224 [2024-07-25 12:14:29.019137] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:14.224 00:07:14.224 real 0m6.103s 00:07:14.224 user 0m24.058s 00:07:14.224 sys 0m1.436s 00:07:14.224 ************************************ 00:07:14.224 END TEST nvmf_host_management 00:07:14.224 ************************************ 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.224 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.483 12:14:29 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:14.483 12:14:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:14.483 12:14:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.483 12:14:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:14.483 ************************************ 00:07:14.483 START TEST nvmf_lvol 00:07:14.483 ************************************ 00:07:14.483 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:14.483 * Looking for test storage... 00:07:14.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.484 [... paths/export.sh@3 and paths/export.sh@4 prepend the same /opt/go, /opt/golangci and /opt/protoc directories again; near-duplicate PATH xtrace lines elided ...] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo [... the fully expanded PATH value elided ...] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35
-- # '[' 0 -eq 1 ']' 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:14.484 12:14:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:14.484 Cannot find device "nvmf_tgt_br" 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:14.484 Cannot find device "nvmf_tgt_br2" 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:14.484 Cannot find device "nvmf_tgt_br" 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:14.484 Cannot find device "nvmf_tgt_br2" 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:14.484 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:14.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:14.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:14.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:07:14.743 00:07:14.743 --- 10.0.0.2 ping statistics --- 00:07:14.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.743 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:14.743 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:14.743 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:07:14.743 00:07:14.743 --- 10.0.0.3 ping statistics --- 00:07:14.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.743 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:14.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:07:14.743 00:07:14.743 --- 10.0.0.1 ping statistics --- 00:07:14.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.743 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:14.743 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=67408 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 67408 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 67408 ']' 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.744 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.002 [2024-07-25 12:14:29.608173] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
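Condensed, the nvmf_veth_init bring-up traced above builds a three-legged veth/bridge topology and then launches the target inside the new namespace. A sketch of the equivalent commands (names, addresses and flags are exactly the ones the harness uses; the intermediate 'ip link set ... up' steps are in the trace but elided here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator leg, root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target leg
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                               # ties the three peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # root ns reaches the target ns
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # and back again
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7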
00:07:15.002 [2024-07-25 12:14:29.608273] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.002 [2024-07-25 12:14:29.743421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.002 [2024-07-25 12:14:29.867417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.002 [2024-07-25 12:14:29.867657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.002 [2024-07-25 12:14:29.867794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.002 [2024-07-25 12:14:29.867848] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.002 [2024-07-25 12:14:29.867879] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.261 [2024-07-25 12:14:29.868103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.261 [2024-07-25 12:14:29.868366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.261 [2024-07-25 12:14:29.868373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.848 12:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.848 12:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:15.848 12:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:15.848 12:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.848 12:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.848 12:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.848 12:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:16.106 [2024-07-25 12:14:30.924658] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.106 12:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:16.672 12:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:16.672 12:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:16.930 12:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:16.930 12:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:17.187 12:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:17.445 12:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=932a9bf5-9fa2-4cca-bf2c-de16e9621042 00:07:17.445 12:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
932a9bf5-9fa2-4cca-bf2c-de16e9621042 lvol 20 00:07:17.703 12:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=63fdd8cf-db98-41ce-9f49-7458d367023a 00:07:17.703 12:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:17.961 12:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 63fdd8cf-db98-41ce-9f49-7458d367023a 00:07:18.219 12:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.477 [2024-07-25 12:14:33.133621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.477 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.741 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67561 00:07:18.741 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:18.741 12:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:19.674 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 63fdd8cf-db98-41ce-9f49-7458d367023a MY_SNAPSHOT 00:07:19.932 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d8d6aa73-f32c-4783-9463-432ce67a05f7 00:07:19.932 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 63fdd8cf-db98-41ce-9f49-7458d367023a 30 00:07:20.498 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d8d6aa73-f32c-4783-9463-432ce67a05f7 MY_CLONE 00:07:20.756 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d7ca0bd6-90fc-4b8c-9ab0-1778d2576575 00:07:20.756 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate d7ca0bd6-90fc-4b8c-9ab0-1778d2576575 00:07:21.322 12:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67561 00:07:29.455 Initializing NVMe Controllers 00:07:29.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:29.455 Controller IO queue size 128, less than required. 00:07:29.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:29.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:29.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:29.455 Initialization complete. Launching workers. 
00:07:29.455 ========================================================
00:07:29.455 Latency(us)
00:07:29.455 Device Information : IOPS MiB/s Average min max
00:07:29.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10588.18 41.36 12089.45 726.77 78632.86
00:07:29.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10387.89 40.58 12320.92 3576.71 76645.28
00:07:29.455 ========================================================
00:07:29.455 Total : 20976.08 81.94 12204.08 726.77 78632.86
00:07:29.455
00:07:29.455 12:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:29.455 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 63fdd8cf-db98-41ce-9f49-7458d367023a
00:07:29.712 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 932a9bf5-9fa2-4cca-bf2c-de16e9621042
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:29.970 rmmod nvme_tcp
00:07:29.970 rmmod nvme_fabrics
00:07:29.970 rmmod nvme_keyring
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 67408 ']'
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 67408
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 67408 ']'
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 67408
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67408
00:07:29.970 killing process with pid 67408
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@968 -- # echo 'killing process with pid 67408' 00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 67408 00:07:29.970 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 67408 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:30.537 00:07:30.537 real 0m16.026s 00:07:30.537 user 1m6.941s 00:07:30.537 sys 0m3.944s 00:07:30.537 ************************************ 00:07:30.537 END TEST nvmf_lvol 00:07:30.537 ************************************ 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.537 ************************************ 00:07:30.537 START TEST nvmf_lvs_grow 00:07:30.537 ************************************ 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:30.537 * Looking for test storage... 
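Stripped of xtrace noise, the nvmf_lvol test that just passed drives the rpc.py sequence sketched below (rpc.py stands for scripts/rpc.py against the namespaced target; $lvs, $lvol, $snap and $clone stand for the UUIDs captured in the trace, e.g. lvs=932a9bf5-9fa2-4cca-bf2c-de16e9621042):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                     # Malloc0
  rpc.py bdev_malloc_create 64 512                     # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)     # lvstore on top of the raid0 bdev
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB logical volume
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # while spdk_nvme_perf runs randwrite against the subsystem over TCP:
  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  rpc.py bdev_lvol_resize "$lvol" 30                   # grow the live volume to 30 MiB
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
  rpc.py bdev_lvol_inflate "$clone"                    # decouple the clone from its snapshot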
00:07:30.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.537 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
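One detail of the common.sh sourcing above worth flagging: each run generates a fresh host NQN via nvme-cli, and the host ID is the UUID embedded in it. Roughly (the parameter-expansion derivation is an assumption made for illustration; the trace only shows the resulting values):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep everything after the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")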
00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:30.538 Cannot find device "nvmf_tgt_br" 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.538 Cannot find device "nvmf_tgt_br2" 00:07:30.538 12:14:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:30.538 Cannot find device "nvmf_tgt_br" 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:30.538 Cannot find device "nvmf_tgt_br2" 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:30.538 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:30.796 12:14:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:30.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:07:30.796 00:07:30.796 --- 10.0.0.2 ping statistics --- 00:07:30.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.796 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:30.796 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:30.796 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:30.796 00:07:30.796 --- 10.0.0.3 ping statistics --- 00:07:30.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.796 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:30.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:30.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:30.796 00:07:30.796 --- 10.0.0.1 ping statistics --- 00:07:30.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.796 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:30.796 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=67929 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 67929 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 67929 ']' 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.054 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.054 [2024-07-25 12:14:45.724624] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:07:31.054 [2024-07-25 12:14:45.724723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.054 [2024-07-25 12:14:45.864001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.313 [2024-07-25 12:14:45.992617] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.313 [2024-07-25 12:14:45.992688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.313 [2024-07-25 12:14:45.992705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.313 [2024-07-25 12:14:45.992716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.313 [2024-07-25 12:14:45.992725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.313 [2024-07-25 12:14:45.992759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.249 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.249 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:32.249 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.249 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.249 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.249 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.249 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:32.249 [2024-07-25 12:14:47.056710] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.249 ************************************ 00:07:32.249 START TEST lvs_grow_clean 00:07:32.249 ************************************ 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:32.249 12:14:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:32.249 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:32.816 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:32.816 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:33.074 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:33.074 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:33.074 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:33.333 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:33.333 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:33.333 12:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ead35063-6c89-4f8d-8543-8412e28b3d33 lvol 150 00:07:33.591 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=04162b73-2939-4240-aa3f-524fb2b82388 00:07:33.591 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:33.591 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:33.849 [2024-07-25 12:14:48.520180] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:33.849 [2024-07-25 12:14:48.520265] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:33.849 true 00:07:33.849 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:33.849 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:34.107 12:14:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:34.107 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.365 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 04162b73-2939-4240-aa3f-524fb2b82388 00:07:34.623 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:34.881 [2024-07-25 12:14:49.548725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.881 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68096 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68096 /var/tmp/bdevperf.sock 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 68096 ']' 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.138 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:35.138 [2024-07-25 12:14:49.902667] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
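The grow mechanics under test here are simple to state: the lvstore sits on an AIO bdev whose backing store is an ordinary truncated file, so growing it means extending the file, rescanning the bdev, and (mid-I/O, a few lines further down) growing the lvstore into the new space. A sketch, where aio_file stands for the harness's test/nvmf/target/aio_bdev path and $lvs for the lvstore UUID ead35063-6c89-4f8d-8543-8412e28b3d33:

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096           # 4 KiB blocks -> 51200 of them
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # trace reports 49 data clusters
  rpc.py bdev_lvol_create -u "$lvs" lvol 150              # 150 MiB volume
  truncate -s 400M aio_file                               # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                         # block count 51200 -> 102400
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"                 # data clusters 49 -> 99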
00:07:35.138 [2024-07-25 12:14:49.902756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68096 ] 00:07:35.396 [2024-07-25 12:14:50.037776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.396 [2024-07-25 12:14:50.159866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.331 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.331 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:36.331 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:36.331 Nvme0n1 00:07:36.331 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:36.589 [ 00:07:36.589 { 00:07:36.589 "aliases": [ 00:07:36.589 "04162b73-2939-4240-aa3f-524fb2b82388" 00:07:36.589 ], 00:07:36.589 "assigned_rate_limits": { 00:07:36.589 "r_mbytes_per_sec": 0, 00:07:36.589 "rw_ios_per_sec": 0, 00:07:36.589 "rw_mbytes_per_sec": 0, 00:07:36.589 "w_mbytes_per_sec": 0 00:07:36.589 }, 00:07:36.589 "block_size": 4096, 00:07:36.589 "claimed": false, 00:07:36.589 "driver_specific": { 00:07:36.589 "mp_policy": "active_passive", 00:07:36.589 "nvme": [ 00:07:36.589 { 00:07:36.589 "ctrlr_data": { 00:07:36.589 "ana_reporting": false, 00:07:36.589 "cntlid": 1, 00:07:36.589 "firmware_revision": "24.09", 00:07:36.589 "model_number": "SPDK bdev Controller", 00:07:36.589 "multi_ctrlr": true, 00:07:36.589 "oacs": { 00:07:36.589 "firmware": 0, 00:07:36.589 "format": 0, 00:07:36.589 "ns_manage": 0, 00:07:36.589 "security": 0 00:07:36.589 }, 00:07:36.589 "serial_number": "SPDK0", 00:07:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.589 "vendor_id": "0x8086" 00:07:36.589 }, 00:07:36.589 "ns_data": { 00:07:36.589 "can_share": true, 00:07:36.589 "id": 1 00:07:36.589 }, 00:07:36.589 "trid": { 00:07:36.589 "adrfam": "IPv4", 00:07:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.589 "traddr": "10.0.0.2", 00:07:36.589 "trsvcid": "4420", 00:07:36.589 "trtype": "TCP" 00:07:36.589 }, 00:07:36.589 "vs": { 00:07:36.589 "nvme_version": "1.3" 00:07:36.589 } 00:07:36.589 } 00:07:36.589 ] 00:07:36.589 }, 00:07:36.589 "memory_domains": [ 00:07:36.589 { 00:07:36.589 "dma_device_id": "system", 00:07:36.589 "dma_device_type": 1 00:07:36.589 } 00:07:36.589 ], 00:07:36.589 "name": "Nvme0n1", 00:07:36.589 "num_blocks": 38912, 00:07:36.589 "product_name": "NVMe disk", 00:07:36.589 "supported_io_types": { 00:07:36.589 "abort": true, 00:07:36.589 "compare": true, 00:07:36.589 "compare_and_write": true, 00:07:36.589 "copy": true, 00:07:36.589 "flush": true, 00:07:36.589 "get_zone_info": false, 00:07:36.589 "nvme_admin": true, 00:07:36.589 "nvme_io": true, 00:07:36.589 "nvme_io_md": false, 00:07:36.589 "nvme_iov_md": false, 00:07:36.589 "read": true, 00:07:36.589 "reset": true, 00:07:36.589 "seek_data": false, 00:07:36.589 "seek_hole": false, 00:07:36.589 "unmap": true, 00:07:36.589 "write": true, 00:07:36.589 
"write_zeroes": true, 00:07:36.589 "zcopy": false, 00:07:36.589 "zone_append": false, 00:07:36.589 "zone_management": false 00:07:36.589 }, 00:07:36.589 "uuid": "04162b73-2939-4240-aa3f-524fb2b82388", 00:07:36.589 "zoned": false 00:07:36.589 } 00:07:36.589 ] 00:07:36.848 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68139 00:07:36.848 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:36.848 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:36.848 Running I/O for 10 seconds... 00:07:37.783 Latency(us) 00:07:37.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.783 Nvme0n1 : 1.00 8286.00 32.37 0.00 0.00 0.00 0.00 0.00 00:07:37.783 =================================================================================================================== 00:07:37.783 Total : 8286.00 32.37 0.00 0.00 0.00 0.00 0.00 00:07:37.783 00:07:38.717 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:38.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.717 Nvme0n1 : 2.00 8278.50 32.34 0.00 0.00 0.00 0.00 0.00 00:07:38.717 =================================================================================================================== 00:07:38.717 Total : 8278.50 32.34 0.00 0.00 0.00 0.00 0.00 00:07:38.717 00:07:38.974 true 00:07:38.974 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:38.974 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:39.540 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:39.540 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:39.540 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 68139 00:07:39.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.798 Nvme0n1 : 3.00 8269.67 32.30 0.00 0.00 0.00 0.00 0.00 00:07:39.798 =================================================================================================================== 00:07:39.798 Total : 8269.67 32.30 0.00 0.00 0.00 0.00 0.00 00:07:39.798 00:07:40.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.733 Nvme0n1 : 4.00 8244.50 32.21 0.00 0.00 0.00 0.00 0.00 00:07:40.733 =================================================================================================================== 00:07:40.733 Total : 8244.50 32.21 0.00 0.00 0.00 0.00 0.00 00:07:40.733 00:07:42.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.108 Nvme0n1 : 5.00 8229.60 32.15 0.00 0.00 0.00 0.00 0.00 00:07:42.108 
=================================================================================================================== 00:07:42.108 Total : 8229.60 32.15 0.00 0.00 0.00 0.00 0.00 00:07:42.108 00:07:43.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.042 Nvme0n1 : 6.00 8215.33 32.09 0.00 0.00 0.00 0.00 0.00 00:07:43.042 =================================================================================================================== 00:07:43.042 Total : 8215.33 32.09 0.00 0.00 0.00 0.00 0.00 00:07:43.042 00:07:43.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.975 Nvme0n1 : 7.00 8213.00 32.08 0.00 0.00 0.00 0.00 0.00 00:07:43.975 =================================================================================================================== 00:07:43.975 Total : 8213.00 32.08 0.00 0.00 0.00 0.00 0.00 00:07:43.975 00:07:44.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.907 Nvme0n1 : 8.00 8186.38 31.98 0.00 0.00 0.00 0.00 0.00 00:07:44.907 =================================================================================================================== 00:07:44.907 Total : 8186.38 31.98 0.00 0.00 0.00 0.00 0.00 00:07:44.907 00:07:45.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.849 Nvme0n1 : 9.00 8167.67 31.90 0.00 0.00 0.00 0.00 0.00 00:07:45.849 =================================================================================================================== 00:07:45.849 Total : 8167.67 31.90 0.00 0.00 0.00 0.00 0.00 00:07:45.849 00:07:46.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.787 Nvme0n1 : 10.00 8153.70 31.85 0.00 0.00 0.00 0.00 0.00 00:07:46.787 =================================================================================================================== 00:07:46.787 Total : 8153.70 31.85 0.00 0.00 0.00 0.00 0.00 00:07:46.787 00:07:46.787 00:07:46.787 Latency(us) 00:07:46.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.787 Nvme0n1 : 10.01 8152.31 31.84 0.00 0.00 15689.93 7357.91 32887.16 00:07:46.787 =================================================================================================================== 00:07:46.787 Total : 8152.31 31.84 0.00 0.00 15689.93 7357.91 32887.16 00:07:46.787 0 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68096 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 68096 ']' 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 68096 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68096 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = 
sudo ']' 00:07:46.787 killing process with pid 68096 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68096' 00:07:46.787 Received shutdown signal, test time was about 10.000000 seconds 00:07:46.787 00:07:46.787 Latency(us) 00:07:46.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.787 =================================================================================================================== 00:07:46.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 68096 00:07:46.787 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 68096 00:07:47.046 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.305 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:47.564 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:47.564 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:48.154 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:48.154 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:48.154 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.154 [2024-07-25 12:15:02.957005] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:48.154 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:48.154 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:48.154 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:48.154 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.154 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.154 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.154 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.155 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.155 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.155 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.155 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:48.155 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:48.413 2024/07/25 12:15:03 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ead35063-6c89-4f8d-8543-8412e28b3d33], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:07:48.413 request: 00:07:48.413 { 00:07:48.413 "method": "bdev_lvol_get_lvstores", 00:07:48.413 "params": { 00:07:48.413 "uuid": "ead35063-6c89-4f8d-8543-8412e28b3d33" 00:07:48.413 } 00:07:48.413 } 00:07:48.413 Got JSON-RPC error response 00:07:48.413 GoRPCClient: error on JSON-RPC call 00:07:48.670 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:48.670 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.670 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.670 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.670 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:48.927 aio_bdev 00:07:48.927 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 04162b73-2939-4240-aa3f-524fb2b82388 00:07:48.927 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=04162b73-2939-4240-aa3f-524fb2b82388 00:07:48.927 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:48.927 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:48.927 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:48.927 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:48.927 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:49.185 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 04162b73-2939-4240-aa3f-524fb2b82388 -t 2000 00:07:49.445 [ 00:07:49.445 { 00:07:49.445 "aliases": [ 00:07:49.445 "lvs/lvol" 00:07:49.445 ], 00:07:49.445 "assigned_rate_limits": { 00:07:49.445 "r_mbytes_per_sec": 0, 00:07:49.445 "rw_ios_per_sec": 0, 00:07:49.445 "rw_mbytes_per_sec": 0, 00:07:49.445 
"w_mbytes_per_sec": 0 00:07:49.445 }, 00:07:49.445 "block_size": 4096, 00:07:49.445 "claimed": false, 00:07:49.445 "driver_specific": { 00:07:49.445 "lvol": { 00:07:49.445 "base_bdev": "aio_bdev", 00:07:49.445 "clone": false, 00:07:49.445 "esnap_clone": false, 00:07:49.445 "lvol_store_uuid": "ead35063-6c89-4f8d-8543-8412e28b3d33", 00:07:49.445 "num_allocated_clusters": 38, 00:07:49.445 "snapshot": false, 00:07:49.445 "thin_provision": false 00:07:49.445 } 00:07:49.445 }, 00:07:49.445 "name": "04162b73-2939-4240-aa3f-524fb2b82388", 00:07:49.445 "num_blocks": 38912, 00:07:49.445 "product_name": "Logical Volume", 00:07:49.445 "supported_io_types": { 00:07:49.445 "abort": false, 00:07:49.445 "compare": false, 00:07:49.445 "compare_and_write": false, 00:07:49.445 "copy": false, 00:07:49.445 "flush": false, 00:07:49.445 "get_zone_info": false, 00:07:49.445 "nvme_admin": false, 00:07:49.445 "nvme_io": false, 00:07:49.445 "nvme_io_md": false, 00:07:49.445 "nvme_iov_md": false, 00:07:49.445 "read": true, 00:07:49.445 "reset": true, 00:07:49.445 "seek_data": true, 00:07:49.445 "seek_hole": true, 00:07:49.445 "unmap": true, 00:07:49.445 "write": true, 00:07:49.445 "write_zeroes": true, 00:07:49.445 "zcopy": false, 00:07:49.445 "zone_append": false, 00:07:49.445 "zone_management": false 00:07:49.445 }, 00:07:49.445 "uuid": "04162b73-2939-4240-aa3f-524fb2b82388", 00:07:49.445 "zoned": false 00:07:49.445 } 00:07:49.445 ] 00:07:49.445 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:49.445 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:49.445 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:49.703 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:49.703 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:49.704 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:49.962 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:49.962 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 04162b73-2939-4240-aa3f-524fb2b82388 00:07:50.220 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ead35063-6c89-4f8d-8543-8412e28b3d33 00:07:50.479 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.737 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:51.303 ************************************ 00:07:51.303 END TEST lvs_grow_clean 00:07:51.303 ************************************ 00:07:51.303 00:07:51.303 real 0m18.770s 00:07:51.303 user 0m17.902s 
00:07:51.303 sys 0m2.402s 00:07:51.303 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.303 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:51.303 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:51.303 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.303 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.303 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.303 ************************************ 00:07:51.303 START TEST lvs_grow_dirty 00:07:51.304 ************************************ 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:51.304 12:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.562 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:51.562 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:51.820 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4030926b-a78d-4570-9182-88662caaafb4 00:07:51.820 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:51.820 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:07:52.078 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:52.078 12:15:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:52.078 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4030926b-a78d-4570-9182-88662caaafb4 lvol 150 00:07:52.336 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=886d694d-8b36-429a-b3cb-0ca8bdde9376 00:07:52.336 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:52.336 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:52.593 [2024-07-25 12:15:07.272232] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:52.593 [2024-07-25 12:15:07.272329] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:52.593 true 00:07:52.593 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:07:52.593 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:52.850 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:52.850 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.107 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 886d694d-8b36-429a-b3cb-0ca8bdde9376 00:07:53.364 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.622 [2024-07-25 12:15:08.344957] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.622 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
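For reference, the target-side setup traced above reduces to a short rpc.py sequence. This is a sketch, not the harness itself: it assumes an nvmf TCP transport already exists in the target, and /tmp/aio_file plus the <...> placeholders stand in for the test's generated paths and UUIDs:

  # 200M file-backed AIO bdev with 4K blocks, then an lvstore and a 150M lvol on it
  truncate -s 200M /tmp/aio_file
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  # expose the lvol over NVMe/TCP on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420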
00:07:53.880 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:53.881 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68545 00:07:53.881 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.881 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68545 /var/tmp/bdevperf.sock 00:07:53.881 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68545 ']' 00:07:53.881 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.881 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.881 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.881 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.881 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:53.881 [2024-07-25 12:15:08.648584] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:07:53.881 [2024-07-25 12:15:08.648687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68545 ] 00:07:54.138 [2024-07-25 12:15:08.785599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.138 [2024-07-25 12:15:08.920685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.071 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.071 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:55.071 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:55.071 Nvme0n1 00:07:55.071 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:55.328 [ 00:07:55.328 { 00:07:55.329 "aliases": [ 00:07:55.329 "886d694d-8b36-429a-b3cb-0ca8bdde9376" 00:07:55.329 ], 00:07:55.329 "assigned_rate_limits": { 00:07:55.329 "r_mbytes_per_sec": 0, 00:07:55.329 "rw_ios_per_sec": 0, 00:07:55.329 "rw_mbytes_per_sec": 0, 00:07:55.329 "w_mbytes_per_sec": 0 00:07:55.329 }, 00:07:55.329 "block_size": 4096, 00:07:55.329 "claimed": false, 00:07:55.329 "driver_specific": { 00:07:55.329 "mp_policy": "active_passive", 00:07:55.329 "nvme": [ 00:07:55.329 { 00:07:55.329 "ctrlr_data": { 
00:07:55.329 "ana_reporting": false, 00:07:55.329 "cntlid": 1, 00:07:55.329 "firmware_revision": "24.09", 00:07:55.329 "model_number": "SPDK bdev Controller", 00:07:55.329 "multi_ctrlr": true, 00:07:55.329 "oacs": { 00:07:55.329 "firmware": 0, 00:07:55.329 "format": 0, 00:07:55.329 "ns_manage": 0, 00:07:55.329 "security": 0 00:07:55.329 }, 00:07:55.329 "serial_number": "SPDK0", 00:07:55.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.329 "vendor_id": "0x8086" 00:07:55.329 }, 00:07:55.329 "ns_data": { 00:07:55.329 "can_share": true, 00:07:55.329 "id": 1 00:07:55.329 }, 00:07:55.329 "trid": { 00:07:55.329 "adrfam": "IPv4", 00:07:55.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.329 "traddr": "10.0.0.2", 00:07:55.329 "trsvcid": "4420", 00:07:55.329 "trtype": "TCP" 00:07:55.329 }, 00:07:55.329 "vs": { 00:07:55.329 "nvme_version": "1.3" 00:07:55.329 } 00:07:55.329 } 00:07:55.329 ] 00:07:55.329 }, 00:07:55.329 "memory_domains": [ 00:07:55.329 { 00:07:55.329 "dma_device_id": "system", 00:07:55.329 "dma_device_type": 1 00:07:55.329 } 00:07:55.329 ], 00:07:55.329 "name": "Nvme0n1", 00:07:55.329 "num_blocks": 38912, 00:07:55.329 "product_name": "NVMe disk", 00:07:55.329 "supported_io_types": { 00:07:55.329 "abort": true, 00:07:55.329 "compare": true, 00:07:55.329 "compare_and_write": true, 00:07:55.329 "copy": true, 00:07:55.329 "flush": true, 00:07:55.329 "get_zone_info": false, 00:07:55.329 "nvme_admin": true, 00:07:55.329 "nvme_io": true, 00:07:55.329 "nvme_io_md": false, 00:07:55.329 "nvme_iov_md": false, 00:07:55.329 "read": true, 00:07:55.329 "reset": true, 00:07:55.329 "seek_data": false, 00:07:55.329 "seek_hole": false, 00:07:55.329 "unmap": true, 00:07:55.329 "write": true, 00:07:55.329 "write_zeroes": true, 00:07:55.329 "zcopy": false, 00:07:55.329 "zone_append": false, 00:07:55.329 "zone_management": false 00:07:55.329 }, 00:07:55.329 "uuid": "886d694d-8b36-429a-b3cb-0ca8bdde9376", 00:07:55.329 "zoned": false 00:07:55.329 } 00:07:55.329 ] 00:07:55.329 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:55.329 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68597 00:07:55.329 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:55.587 Running I/O for 10 seconds... 
00:07:56.522 Latency(us)
00:07:56.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:56.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:56.522 Nvme0n1 : 1.00 8515.00 33.26 0.00 0.00 0.00 0.00 0.00
00:07:56.522 ===================================================================================================================
00:07:56.522 Total : 8515.00 33.26 0.00 0.00 0.00 0.00 0.00
00:07:56.522
00:07:57.455 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4030926b-a78d-4570-9182-88662caaafb4
00:07:57.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:57.455 Nvme0n1 : 2.00 8492.00 33.17 0.00 0.00 0.00 0.00 0.00
00:07:57.455 ===================================================================================================================
00:07:57.455 Total : 8492.00 33.17 0.00 0.00 0.00 0.00 0.00
00:07:57.455
00:07:57.713 true
00:07:57.713 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4
00:07:57.713 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:07:57.971 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:07:57.971 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:07:57.971 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68597
00:07:58.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:58.537 Nvme0n1 : 3.00 8416.33 32.88 0.00 0.00 0.00 0.00 0.00
00:07:58.538 ===================================================================================================================
00:07:58.538 Total : 8416.33 32.88 0.00 0.00 0.00 0.00 0.00
00:07:58.538
00:07:59.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:59.470 Nvme0n1 : 4.00 8344.25 32.59 0.00 0.00 0.00 0.00 0.00
00:07:59.470 ===================================================================================================================
00:07:59.470 Total : 8344.25 32.59 0.00 0.00 0.00 0.00 0.00
00:07:59.470
00:08:00.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:00.405 Nvme0n1 : 5.00 8278.80 32.34 0.00 0.00 0.00 0.00 0.00
00:08:00.405 ===================================================================================================================
00:08:00.405 Total : 8278.80 32.34 0.00 0.00 0.00 0.00 0.00
00:08:00.405
00:08:01.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:01.779 Nvme0n1 : 6.00 8066.83 31.51 0.00 0.00 0.00 0.00 0.00
00:08:01.779 ===================================================================================================================
00:08:01.779 Total : 8066.83 31.51 0.00 0.00 0.00 0.00 0.00
00:08:01.779
00:08:02.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:02.712 Nvme0n1 : 7.00 8023.00 31.34 0.00 0.00 0.00 0.00 0.00
00:08:02.713 ===================================================================================================================
00:08:02.713 Total : 8023.00 31.34 0.00 0.00 0.00 0.00 0.00
00:08:02.713
00:08:03.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:03.646 Nvme0n1 : 8.00 8012.75 31.30 0.00 0.00 0.00 0.00 0.00
00:08:03.646 ===================================================================================================================
00:08:03.646 Total : 8012.75 31.30 0.00 0.00 0.00 0.00 0.00
00:08:03.646
00:08:04.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:04.580 Nvme0n1 : 9.00 7991.67 31.22 0.00 0.00 0.00 0.00 0.00
00:08:04.580 ===================================================================================================================
00:08:04.580 Total : 7991.67 31.22 0.00 0.00 0.00 0.00 0.00
00:08:04.580
00:08:05.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:05.514 Nvme0n1 : 10.00 7986.50 31.20 0.00 0.00 0.00 0.00 0.00
00:08:05.514 ===================================================================================================================
00:08:05.514 Total : 7986.50 31.20 0.00 0.00 0.00 0.00 0.00
00:08:05.514
00:08:05.514
00:08:05.514 Latency(us)
00:08:05.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:05.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:05.514 Nvme0n1 : 10.00 7996.93 31.24 0.00 0.00 16001.56 7268.54 134408.38
00:08:05.514 ===================================================================================================================
00:08:05.514 Total : 7996.93 31.24 0.00 0.00 16001.56 7268.54 134408.38
00:08:05.514 0
00:08:05.514 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68545
00:08:05.514 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 68545 ']'
00:08:05.514 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 68545
00:08:05.515 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname
00:08:05.515 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:05.515 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68545
00:08:05.515 killing process with pid 68545 Received shutdown signal, test time was about 10.000000 seconds
00:08:05.515
00:08:05.515 Latency(us)
00:08:05.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:05.515 ===================================================================================================================
00:08:05.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:05.515 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:08:05.515 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:08:05.515 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68545'
00:08:05.515 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 68545
00:08:05.515 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 68545
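The live grow verified above (total_data_clusters going from 49 to 99 at the 2-second mark, while I/O keeps flowing) relies on three pieces, two of which ran during setup at 12:15:07 before bdevperf started. Sketched here with <aio_backing_file> standing in for the test's aio_bdev path:

  truncate -s 400M <aio_backing_file>        # grow the backing file from 200M to 400M (done at setup)
  scripts/rpc.py bdev_aio_rescan aio_bdev    # let the AIO bdev pick up the new block count (done at setup)
  scripts/rpc.py bdev_lvol_grow_lvstore -u 4030926b-a78d-4570-9182-88662caaafb4   # issued mid-run
  # verify the lvstore now spans the larger device
  scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 | jq -r '.[0].total_data_clusters'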
00:08:05.772 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.030 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:06.315 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:06.315 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 67929 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 67929 00:08:06.573 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 67929 Killed "${NVMF_APP[@]}" "$@" 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=68761 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 68761 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68761 ']' 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
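This is where the dirty variant earns its name: the first target (pid 67929) is killed with SIGKILL while the lvstore is still open, so the blobstore superblock is never marked clean, and a fresh nvmf_tgt (pid 68761) is started to load it back. A schematic of the pattern, omitting the harness's ip netns wrapper (flags copied from the trace; error handling left out):

  kill -9 "$nvmfpid"                           # die with the lvstore open -> on-disk state is dirty
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # start a fresh target on core 0
  # re-creating the AIO bdev makes blobstore load, detect the unclean shutdown,
  # and run recovery (the 'Performing recovery on blobstore' notices below)
  scripts/rpc.py bdev_aio_create <aio_backing_file> aio_bdev 4096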
00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.573 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.573 [2024-07-25 12:15:21.397969] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:08:06.573 [2024-07-25 12:15:21.398070] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.831 [2024-07-25 12:15:21.534869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.831 [2024-07-25 12:15:21.645778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.831 [2024-07-25 12:15:21.645837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.831 [2024-07-25 12:15:21.645847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.831 [2024-07-25 12:15:21.645855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.831 [2024-07-25 12:15:21.645862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.831 [2024-07-25 12:15:21.645888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.764 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.764 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:07.764 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.764 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.764 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:07.765 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.765 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.023 [2024-07-25 12:15:22.730913] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:08.023 [2024-07-25 12:15:22.731261] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:08.023 [2024-07-25 12:15:22.731502] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:08.023 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:08.023 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 886d694d-8b36-429a-b3cb-0ca8bdde9376 00:08:08.023 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=886d694d-8b36-429a-b3cb-0ca8bdde9376 00:08:08.023 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.023 12:15:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:08.023 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.023 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.023 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.280 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 886d694d-8b36-429a-b3cb-0ca8bdde9376 -t 2000 00:08:08.537 [ 00:08:08.537 { 00:08:08.537 "aliases": [ 00:08:08.537 "lvs/lvol" 00:08:08.537 ], 00:08:08.537 "assigned_rate_limits": { 00:08:08.537 "r_mbytes_per_sec": 0, 00:08:08.537 "rw_ios_per_sec": 0, 00:08:08.537 "rw_mbytes_per_sec": 0, 00:08:08.537 "w_mbytes_per_sec": 0 00:08:08.537 }, 00:08:08.537 "block_size": 4096, 00:08:08.537 "claimed": false, 00:08:08.537 "driver_specific": { 00:08:08.537 "lvol": { 00:08:08.537 "base_bdev": "aio_bdev", 00:08:08.537 "clone": false, 00:08:08.537 "esnap_clone": false, 00:08:08.537 "lvol_store_uuid": "4030926b-a78d-4570-9182-88662caaafb4", 00:08:08.537 "num_allocated_clusters": 38, 00:08:08.538 "snapshot": false, 00:08:08.538 "thin_provision": false 00:08:08.538 } 00:08:08.538 }, 00:08:08.538 "name": "886d694d-8b36-429a-b3cb-0ca8bdde9376", 00:08:08.538 "num_blocks": 38912, 00:08:08.538 "product_name": "Logical Volume", 00:08:08.538 "supported_io_types": { 00:08:08.538 "abort": false, 00:08:08.538 "compare": false, 00:08:08.538 "compare_and_write": false, 00:08:08.538 "copy": false, 00:08:08.538 "flush": false, 00:08:08.538 "get_zone_info": false, 00:08:08.538 "nvme_admin": false, 00:08:08.538 "nvme_io": false, 00:08:08.538 "nvme_io_md": false, 00:08:08.538 "nvme_iov_md": false, 00:08:08.538 "read": true, 00:08:08.538 "reset": true, 00:08:08.538 "seek_data": true, 00:08:08.538 "seek_hole": true, 00:08:08.538 "unmap": true, 00:08:08.538 "write": true, 00:08:08.538 "write_zeroes": true, 00:08:08.538 "zcopy": false, 00:08:08.538 "zone_append": false, 00:08:08.538 "zone_management": false 00:08:08.538 }, 00:08:08.538 "uuid": "886d694d-8b36-429a-b3cb-0ca8bdde9376", 00:08:08.538 "zoned": false 00:08:08.538 } 00:08:08.538 ] 00:08:08.538 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:08.538 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:08:08.538 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:08.795 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:08.795 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:08:08.795 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:09.052 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( 
data_clusters == 99 )) 00:08:09.052 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.309 [2024-07-25 12:15:24.159865] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:09.567 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:08:09.825 2024/07/25 12:15:24 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:4030926b-a78d-4570-9182-88662caaafb4], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:09.825 request: 00:08:09.825 { 00:08:09.825 "method": "bdev_lvol_get_lvstores", 00:08:09.825 "params": { 00:08:09.825 "uuid": "4030926b-a78d-4570-9182-88662caaafb4" 00:08:09.825 } 00:08:09.825 } 00:08:09.825 Got JSON-RPC error response 00:08:09.825 GoRPCClient: error on JSON-RPC call 00:08:09.825 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:09.825 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.825 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.825 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.825 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.083 aio_bdev 00:08:10.083 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 886d694d-8b36-429a-b3cb-0ca8bdde9376 00:08:10.083 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=886d694d-8b36-429a-b3cb-0ca8bdde9376 00:08:10.083 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:10.083 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:10.083 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:10.083 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:10.083 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:10.341 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 886d694d-8b36-429a-b3cb-0ca8bdde9376 -t 2000 00:08:10.600 [ 00:08:10.600 { 00:08:10.600 "aliases": [ 00:08:10.600 "lvs/lvol" 00:08:10.600 ], 00:08:10.600 "assigned_rate_limits": { 00:08:10.600 "r_mbytes_per_sec": 0, 00:08:10.600 "rw_ios_per_sec": 0, 00:08:10.600 "rw_mbytes_per_sec": 0, 00:08:10.600 "w_mbytes_per_sec": 0 00:08:10.600 }, 00:08:10.600 "block_size": 4096, 00:08:10.600 "claimed": false, 00:08:10.600 "driver_specific": { 00:08:10.600 "lvol": { 00:08:10.600 "base_bdev": "aio_bdev", 00:08:10.600 "clone": false, 00:08:10.600 "esnap_clone": false, 00:08:10.600 "lvol_store_uuid": "4030926b-a78d-4570-9182-88662caaafb4", 00:08:10.600 "num_allocated_clusters": 38, 00:08:10.600 "snapshot": false, 00:08:10.600 "thin_provision": false 00:08:10.600 } 00:08:10.600 }, 00:08:10.600 "name": "886d694d-8b36-429a-b3cb-0ca8bdde9376", 00:08:10.600 "num_blocks": 38912, 00:08:10.600 "product_name": "Logical Volume", 00:08:10.600 "supported_io_types": { 00:08:10.600 "abort": false, 00:08:10.600 "compare": false, 00:08:10.600 "compare_and_write": false, 00:08:10.600 "copy": false, 00:08:10.600 "flush": false, 00:08:10.600 "get_zone_info": false, 00:08:10.600 "nvme_admin": false, 00:08:10.600 "nvme_io": false, 00:08:10.600 "nvme_io_md": false, 00:08:10.600 "nvme_iov_md": false, 00:08:10.600 "read": true, 00:08:10.600 "reset": true, 00:08:10.601 "seek_data": true, 00:08:10.601 "seek_hole": true, 00:08:10.601 "unmap": true, 00:08:10.601 "write": true, 00:08:10.601 "write_zeroes": true, 00:08:10.601 "zcopy": false, 00:08:10.601 "zone_append": false, 00:08:10.601 "zone_management": false 00:08:10.601 }, 00:08:10.601 "uuid": "886d694d-8b36-429a-b3cb-0ca8bdde9376", 00:08:10.601 "zoned": false 00:08:10.601 } 00:08:10.601 ] 00:08:10.601 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:10.601 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:08:10.601 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:10.858 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:10.858 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4030926b-a78d-4570-9182-88662caaafb4 00:08:10.858 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:11.116 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:11.116 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 886d694d-8b36-429a-b3cb-0ca8bdde9376 00:08:11.374 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4030926b-a78d-4570-9182-88662caaafb4 00:08:11.631 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.890 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:12.148 00:08:12.148 real 0m21.024s 00:08:12.148 user 0m44.048s 00:08:12.148 sys 0m7.898s 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:12.148 ************************************ 00:08:12.148 END TEST lvs_grow_dirty 00:08:12.148 ************************************ 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:12.148 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:12.148 nvmf_trace.0 00:08:12.406 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:12.406 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:12.406 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.406 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 
-- # sync 00:08:12.406 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.406 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:12.406 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.406 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.406 rmmod nvme_tcp 00:08:12.406 rmmod nvme_fabrics 00:08:12.665 rmmod nvme_keyring 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 68761 ']' 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 68761 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 68761 ']' 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 68761 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68761 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.665 killing process with pid 68761 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68761' 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 68761 00:08:12.665 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 68761 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:12.924 00:08:12.924 real 0m42.396s 00:08:12.924 user 1m8.782s 00:08:12.924 sys 0m11.056s 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.924 ************************************ 00:08:12.924 END TEST nvmf_lvs_grow 00:08:12.924 ************************************ 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.924 ************************************ 00:08:12.924 START TEST nvmf_bdev_io_wait 00:08:12.924 ************************************ 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.924 * Looking for test storage... 00:08:12.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.924 12:15:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.924 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.925 12:15:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:12.925 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:13.184 Cannot find device "nvmf_tgt_br" 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:13.184 Cannot find device "nvmf_tgt_br2" 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:13.184 Cannot find device "nvmf_tgt_br" 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:13.184 Cannot find device "nvmf_tgt_br2" 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:13.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:13.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:13.184 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:13.184 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:13.184 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:13.184 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:13.184 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:13.184 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:13.184 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:13.184 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:13.184 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:13.443 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:13.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:08:13.444 00:08:13.444 --- 10.0.0.2 ping statistics --- 00:08:13.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.444 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:13.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:13.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:13.444 00:08:13.444 --- 10.0.0.3 ping statistics --- 00:08:13.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.444 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:13.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:08:13.444 00:08:13.444 --- 10.0.0.1 ping statistics --- 00:08:13.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.444 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=69179 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 69179 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 69179 ']' 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
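
The block above is nvmf_veth_init from test/nvmf/common.sh. The "Cannot find device" and "Cannot open network namespace" messages are expected: the helper tears down any leftover topology before building a fresh one, and the trace shows each failed cleanup command followed by a tolerated "# true". A condensed sketch of the topology it then creates, with the target-side veth ends inside the nvmf_tgt_ns_spdk namespace, the initiator end in the root namespace, and everything joined by the nvmf_br bridge (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is omitted for brevity; commands are assembled from the traced ip/iptables calls, not copied verbatim from common.sh):

# Target interfaces live in a network namespace; a bridge links them to the initiator side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # bridged traffic
ping -c 1 10.0.0.2                                                   # initiator -> target, as in the log
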
00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.444 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.444 [2024-07-25 12:15:28.211039] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:08:13.444 [2024-07-25 12:15:28.211154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.702 [2024-07-25 12:15:28.350335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.702 [2024-07-25 12:15:28.462698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.702 [2024-07-25 12:15:28.462995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.702 [2024-07-25 12:15:28.463156] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.702 [2024-07-25 12:15:28.463211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.702 [2024-07-25 12:15:28.463337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.702 [2024-07-25 12:15:28.463830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.702 [2024-07-25 12:15:28.463998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.702 [2024-07-25 12:15:28.464093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.702 [2024-07-25 12:15:28.464105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
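
Note the --wait-for-rpc flow here: bdev_set_options must run before the bdev subsystem initializes, so the target starts with only its RPC server up, the option is set, and framework_start_init then completes initialization. Shrinking the bdev_io pool to 5 entries with a cache of 1 is presumably what lets this test exercise the bdev_io_wait path, since a queue depth of 128 quickly exhausts the pool. Sketched with scripts/rpc.py, which the rpc_cmd helper in the trace wraps (assumes the SPDK repo root as working directory):

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
./scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool (size 5, cache 1)
./scripts/rpc.py framework_start_init          # now finish subsystem initialization
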
00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.637 [2024-07-25 12:15:29.310386] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.637 Malloc0 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.637 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.638 [2024-07-25 12:15:29.374807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69243 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:14.638 12:15:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=69245 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.638 { 00:08:14.638 "params": { 00:08:14.638 "name": "Nvme$subsystem", 00:08:14.638 "trtype": "$TEST_TRANSPORT", 00:08:14.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.638 "adrfam": "ipv4", 00:08:14.638 "trsvcid": "$NVMF_PORT", 00:08:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.638 "hdgst": ${hdgst:-false}, 00:08:14.638 "ddgst": ${ddgst:-false} 00:08:14.638 }, 00:08:14.638 "method": "bdev_nvme_attach_controller" 00:08:14.638 } 00:08:14.638 EOF 00:08:14.638 )") 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=69248 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.638 { 00:08:14.638 "params": { 00:08:14.638 "name": "Nvme$subsystem", 00:08:14.638 "trtype": "$TEST_TRANSPORT", 00:08:14.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.638 "adrfam": "ipv4", 00:08:14.638 "trsvcid": "$NVMF_PORT", 00:08:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.638 "hdgst": ${hdgst:-false}, 00:08:14.638 "ddgst": ${ddgst:-false} 00:08:14.638 }, 00:08:14.638 "method": "bdev_nvme_attach_controller" 00:08:14.638 } 00:08:14.638 EOF 00:08:14.638 )") 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69250 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 
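
Note the --json /dev/fd/63 argument on each bdevperf launch above: gen_nvmf_target_json (a helper in test/nvmf/common.sh) prints a bdev_nvme_attach_controller configuration on stdout, and the test hands it to bdevperf through bash process substitution, so no temporary file is written. A hypothetical standalone invocation mirroring the traced write job (assumes common.sh has been sourced so the helper exists):

./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(gen_nvmf_target_json)   # bash expands <(...) to /dev/fd/63, as seen in the log
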
00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.638 { 00:08:14.638 "params": { 00:08:14.638 "name": "Nvme$subsystem", 00:08:14.638 "trtype": "$TEST_TRANSPORT", 00:08:14.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.638 "adrfam": "ipv4", 00:08:14.638 "trsvcid": "$NVMF_PORT", 00:08:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.638 "hdgst": ${hdgst:-false}, 00:08:14.638 "ddgst": ${ddgst:-false} 00:08:14.638 }, 00:08:14.638 "method": "bdev_nvme_attach_controller" 00:08:14.638 } 00:08:14.638 EOF 00:08:14.638 )") 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.638 { 00:08:14.638 "params": { 00:08:14.638 "name": "Nvme$subsystem", 00:08:14.638 "trtype": "$TEST_TRANSPORT", 00:08:14.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.638 "adrfam": "ipv4", 00:08:14.638 "trsvcid": "$NVMF_PORT", 00:08:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.638 "hdgst": ${hdgst:-false}, 00:08:14.638 "ddgst": ${ddgst:-false} 00:08:14.638 }, 00:08:14.638 "method": "bdev_nvme_attach_controller" 00:08:14.638 } 00:08:14.638 EOF 00:08:14.638 )") 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
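
The config fragments above are heredocs with an unquoted delimiter, so $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expand when each fragment is captured into the config array; jq . then parses the assembled document, failing the test early if expansion produced invalid JSON, and the IFS=, plus printf pair evidently joins multiple fragments with commas when more than one subsystem is requested. A minimal sketch of the mechanism outside the helper:

TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
fragment=$(cat <<EOF
{ "params": { "trtype": "$TEST_TRANSPORT", "traddr": "$NVMF_FIRST_TARGET_IP", "trsvcid": "$NVMF_PORT" } }
EOF
)
jq . <<<"$fragment"   # prints the expanded JSON, matching the printf output below
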
00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.638 "params": { 00:08:14.638 "name": "Nvme1", 00:08:14.638 "trtype": "tcp", 00:08:14.638 "traddr": "10.0.0.2", 00:08:14.638 "adrfam": "ipv4", 00:08:14.638 "trsvcid": "4420", 00:08:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.638 "hdgst": false, 00:08:14.638 "ddgst": false 00:08:14.638 }, 00:08:14.638 "method": "bdev_nvme_attach_controller" 00:08:14.638 }' 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.638 "params": { 00:08:14.638 "name": "Nvme1", 00:08:14.638 "trtype": "tcp", 00:08:14.638 "traddr": "10.0.0.2", 00:08:14.638 "adrfam": "ipv4", 00:08:14.638 "trsvcid": "4420", 00:08:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.638 "hdgst": false, 00:08:14.638 "ddgst": false 00:08:14.638 }, 00:08:14.638 "method": "bdev_nvme_attach_controller" 00:08:14.638 }' 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.638 "params": { 00:08:14.638 "name": "Nvme1", 00:08:14.638 "trtype": "tcp", 00:08:14.638 "traddr": "10.0.0.2", 00:08:14.638 "adrfam": "ipv4", 00:08:14.638 "trsvcid": "4420", 00:08:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.638 "hdgst": false, 00:08:14.638 "ddgst": false 00:08:14.638 }, 00:08:14.638 "method": "bdev_nvme_attach_controller" 00:08:14.638 }' 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:14.638 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.638 "params": { 00:08:14.638 "name": "Nvme1", 00:08:14.638 "trtype": "tcp", 00:08:14.638 "traddr": "10.0.0.2", 00:08:14.638 "adrfam": "ipv4", 00:08:14.638 "trsvcid": "4420", 00:08:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.638 "hdgst": false, 00:08:14.638 "ddgst": false 00:08:14.638 }, 00:08:14.638 "method": "bdev_nvme_attach_controller" 00:08:14.638 }' 00:08:14.638 [2024-07-25 12:15:29.445504] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:08:14.639 [2024-07-25 12:15:29.446194] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:14.639 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 69243 00:08:14.639 [2024-07-25 12:15:29.462692] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:08:14.639 [2024-07-25 12:15:29.463364] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:14.639 [2024-07-25 12:15:29.466471] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:08:14.639 [2024-07-25 12:15:29.466555] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:14.897 [2024-07-25 12:15:29.500343] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:08:14.897 [2024-07-25 12:15:29.500471] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:14.897 [2024-07-25 12:15:29.665835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.897 [2024-07-25 12:15:29.743985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.897 [2024-07-25 12:15:29.756672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:15.156 [2024-07-25 12:15:29.818449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.156 [2024-07-25 12:15:29.836742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:15.156 [2024-07-25 12:15:29.893192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.156 Running I/O for 1 seconds... 00:08:15.156 [2024-07-25 12:15:29.917965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:15.156 Running I/O for 1 seconds... 00:08:15.156 [2024-07-25 12:15:30.013864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:15.414 Running I/O for 1 seconds... 00:08:15.414 Running I/O for 1 seconds... 
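
Four bdevperf instances are launched in parallel here, one per workload. Each gets a disjoint core mask (0x10/0x20/0x40/0x80), its own 256 MiB memory allocation (-s 256), and a distinct -i shared-memory ID (1 through 4) so the DPDK EAL instances, visible above with file prefixes spdk1 through spdk4, do not collide. A condensed sketch of the fan-out (gen_nvmf_target_json as in the earlier sketch):

for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
    set -- $spec   # $1 = core mask, $2 = shm ID, $3 = workload
    ./build/examples/bdevperf -m "$1" -i "$2" -q 128 -o 4096 -w "$3" -t 1 -s 256 \
        --json <(gen_nvmf_target_json) &
done
wait   # the script itself waits on each stored PID (WRITE_PID, READ_PID, ...)
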
00:08:16.350 00:08:16.350 Latency(us) 00:08:16.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.350 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:16.350 Nvme1n1 : 1.00 191423.11 747.75 0.00 0.00 666.08 281.13 916.01 00:08:16.350 =================================================================================================================== 00:08:16.350 Total : 191423.11 747.75 0.00 0.00 666.08 281.13 916.01 00:08:16.350 00:08:16.350 Latency(us) 00:08:16.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.350 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:16.350 Nvme1n1 : 1.01 10416.07 40.69 0.00 0.00 12237.39 6523.81 17277.67 00:08:16.350 =================================================================================================================== 00:08:16.350 Total : 10416.07 40.69 0.00 0.00 12237.39 6523.81 17277.67 00:08:16.350 00:08:16.350 Latency(us) 00:08:16.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.350 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:16.350 Nvme1n1 : 1.02 5442.12 21.26 0.00 0.00 23326.07 9711.24 34317.03 00:08:16.350 =================================================================================================================== 00:08:16.350 Total : 5442.12 21.26 0.00 0.00 23326.07 9711.24 34317.03 00:08:16.350 00:08:16.350 Latency(us) 00:08:16.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.350 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:16.350 Nvme1n1 : 1.01 6935.19 27.09 0.00 0.00 18367.30 7745.16 27167.65 00:08:16.350 =================================================================================================================== 00:08:16.350 Total : 6935.19 27.09 0.00 0.00 18367.30 7745.16 27167.65 00:08:16.608 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 69245 00:08:16.608 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 69248 00:08:16.608 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 69250 00:08:16.608 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.608 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
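
A quick consistency check on the result tables above: MiB/s is just IOPS times the 4096-byte IO size. The flush job's roughly 191k IOPS towers over the others, presumably because flush completes immediately on a RAM-backed Malloc bdev while read, write and unmap actually move or modify data.

# Recomputing two MiB/s cells from the reported IOPS:
awk 'BEGIN { printf "unmap: %.2f MiB/s\n", 10416.07 * 4096 / 1048576 }'    # -> 40.69
awk 'BEGIN { printf "flush: %.2f MiB/s\n", 191423.11 * 4096 / 1048576 }'   # -> 747.75
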
00:08:16.609 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.609 rmmod nvme_tcp 00:08:16.609 rmmod nvme_fabrics 00:08:16.866 rmmod nvme_keyring 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 69179 ']' 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 69179 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 69179 ']' 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 69179 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69179 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.867 killing process with pid 69179 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69179' 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 69179 00:08:16.867 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 69179 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:17.126 00:08:17.126 real 0m4.121s 00:08:17.126 user 0m18.189s 00:08:17.126 sys 0m2.043s 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.126 ************************************ 00:08:17.126 END TEST nvmf_bdev_io_wait 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 
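
That completes nvmf_bdev_io_wait; the trailing block is the nvmftestfini teardown. A condensed sketch of what the trace shows (the retry-loop body is paraphrased, the namespace deletion is assumed since _remove_spdk_ns runs with xtrace disabled, and modprobe -v -r prints the rmmod lines seen above):

# Sketch only; nvmftestfini in test/nvmf/common.sh is the authoritative version.
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # modules can stay busy briefly
done
kill "$nvmfpid"                      # killprocess 69179 in the trace
ip netns delete nvmf_tgt_ns_spdk     # via remove_spdk_ns (assumed)
ip -4 addr flush nvmf_init_if        # drop the initiator address, as traced
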
00:08:17.126 ************************************ 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.126 ************************************ 00:08:17.126 START TEST nvmf_queue_depth 00:08:17.126 ************************************ 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:17.126 * Looking for test storage... 00:08:17.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:17.126 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:17.127 Cannot find device "nvmf_tgt_br" 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:17.127 Cannot find device "nvmf_tgt_br2" 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:17.127 Cannot find device "nvmf_tgt_br" 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:17.127 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:17.386 Cannot find device "nvmf_tgt_br2" 00:08:17.386 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:17.386 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:17.386 12:15:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:17.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:08:17.386 00:08:17.386 --- 10.0.0.2 ping statistics --- 00:08:17.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.386 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:17.386 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:17.386 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:17.386 00:08:17.386 --- 10.0.0.3 ping statistics --- 00:08:17.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.386 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:17.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:17.386 00:08:17.386 --- 10.0.0.1 ping statistics --- 00:08:17.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.386 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=69472 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 69472 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 69472 ']' 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.386 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.645 [2024-07-25 12:15:32.282211] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:08:17.645 [2024-07-25 12:15:32.282336] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.645 [2024-07-25 12:15:32.420462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.903 [2024-07-25 12:15:32.542423] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.903 [2024-07-25 12:15:32.542494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.903 [2024-07-25 12:15:32.542514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.903 [2024-07-25 12:15:32.542527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.903 [2024-07-25 12:15:32.542536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.903 [2024-07-25 12:15:32.542569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 [2024-07-25 12:15:33.247339] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 Malloc0 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
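
Note: the rpc_cmd calls traced above and continued just below (nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are the standard bring-up sequence for a TCP target in this suite. As a standalone sketch, assuming a running nvmf_tgt and the in-tree scripts/rpc.py client (rpc_cmd itself is a test helper, not reproduced verbatim here):

    # Stand up the TCP transport, back a namespace with a RAM bdev, and expose it:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # -u sets the I/O unit size; -o toggles the TCP C2H success optimization
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte block size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
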
00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 [2024-07-25 12:15:33.312542] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69522 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69522 /var/tmp/bdevperf.sock 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 69522 ']' 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.469 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.727 [2024-07-25 12:15:33.372149] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:08:18.727 [2024-07-25 12:15:33.372248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69522 ] 00:08:18.727 [2024-07-25 12:15:33.514813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.984 [2024-07-25 12:15:33.635725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.551 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.551 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:19.551 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:19.551 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.551 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.551 NVMe0n1 00:08:19.551 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.551 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:19.812 Running I/O for 10 seconds... 00:08:29.811 00:08:29.811 Latency(us) 00:08:29.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.811 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:29.811 Verification LBA range: start 0x0 length 0x4000 00:08:29.811 NVMe0n1 : 10.09 9230.77 36.06 0.00 0.00 110465.33 27405.96 74830.20 00:08:29.811 =================================================================================================================== 00:08:29.811 Total : 9230.77 36.06 0.00 0.00 110465.33 27405.96 74830.20 00:08:29.811 0 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 69522 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 69522 ']' 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 69522 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69522 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69522' 00:08:29.811 killing process with pid 69522 00:08:29.811 Received shutdown signal, test time was about 10.000000 seconds 00:08:29.811 00:08:29.811 Latency(us) 00:08:29.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.811 
=================================================================================================================== 00:08:29.811 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 69522 00:08:29.811 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 69522 00:08:30.068 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:30.068 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:30.068 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.068 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:30.068 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.068 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:30.068 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.068 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.068 rmmod nvme_tcp 00:08:30.325 rmmod nvme_fabrics 00:08:30.325 rmmod nvme_keyring 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 69472 ']' 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 69472 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 69472 ']' 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 69472 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.325 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69472 00:08:30.325 killing process with pid 69472 00:08:30.325 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:30.325 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:30.325 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69472' 00:08:30.325 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 69472 00:08:30.325 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 69472 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.582 12:15:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:30.582 00:08:30.582 real 0m13.463s 00:08:30.582 user 0m23.334s 00:08:30.582 sys 0m2.022s 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.582 ************************************ 00:08:30.582 END TEST nvmf_queue_depth 00:08:30.582 ************************************ 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.582 ************************************ 00:08:30.582 START TEST nvmf_target_multipath 00:08:30.582 ************************************ 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:30.582 * Looking for test storage... 
00:08:30.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.582 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:30.583 12:15:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:30.583 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:30.840 Cannot find device "nvmf_tgt_br" 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.840 Cannot find device "nvmf_tgt_br2" 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:30.840 Cannot find device "nvmf_tgt_br" 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:30.840 Cannot find device "nvmf_tgt_br2" 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.840 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.096 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.096 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.096 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.096 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.096 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.096 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.096 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.096 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:31.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:08:31.096 00:08:31.096 --- 10.0.0.2 ping statistics --- 00:08:31.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.096 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:31.096 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:31.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:31.096 00:08:31.096 --- 10.0.0.3 ping statistics --- 00:08:31.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.097 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:31.097 00:08:31.097 --- 10.0.0.1 ping statistics --- 00:08:31.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.097 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=69859 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 69859 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 69859 ']' 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
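
Note: nvmfappstart launched the target inside the test namespace (the NVMF_APP prefix assembled at nvmf/common.sh@209 above expands to ip netns exec nvmf_tgt_ns_spdk), and waitforlisten blocks until the app answers on its RPC socket. A minimal sketch of that wait, assuming the in-tree scripts/rpc.py client; the real helper in test/common/autotest_common.sh also caps the number of retries:

    pid=69859                       # target pid, from nvmfpid above
    rpc_addr=/var/tmp/spdk.sock
    while kill -0 "$pid" 2>/dev/null; do
        # rpc_get_methods only succeeds once the app is up and listening on the socket
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
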
00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.097 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.097 [2024-07-25 12:15:45.855690] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:08:31.097 [2024-07-25 12:15:45.855764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.353 [2024-07-25 12:15:45.985957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.353 [2024-07-25 12:15:46.100318] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.353 [2024-07-25 12:15:46.100798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.353 [2024-07-25 12:15:46.100888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.353 [2024-07-25 12:15:46.100975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.353 [2024-07-25 12:15:46.101040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.353 [2024-07-25 12:15:46.101220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.353 [2024-07-25 12:15:46.101361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.353 [2024-07-25 12:15:46.102040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.353 [2024-07-25 12:15:46.102096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.322 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.322 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:08:32.322 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.322 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.322 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:32.322 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.322 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.322 [2024-07-25 12:15:47.138132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.322 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:32.581 Malloc0 00:08:32.839 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
00:08:32.839 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:33.097 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:33.355 [2024-07-25 12:15:48.168423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:33.356 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:08:33.614 [2024-07-25 12:15:48.412620] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:08:33.614 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
00:08:33.873 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
00:08:34.131 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME
00:08:34.131 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0
00:08:34.131 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:08:34.131 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:08:34.131 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0
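
Note: the two nvme connect invocations above attach the same subsystem (nqn.2016-06.io.spdk:cnode1) through both listener addresses; in nvme-cli, -g/-G additionally request TCP header and data digests. With native NVMe multipath the kernel merges the two sessions into one subsystem device, which is exactly what the path enumeration traced below relies on. A sketch of that enumeration (the nvme-subsys0/nvme* numbering is whatever the kernel assigns):

    paths=(/sys/class/nvme-subsystem/nvme-subsys0/nvme*/nvme*c*)   # per-path controller nodes, e.g. nvme0c0n1 and nvme0c1n1
    paths=("${paths[@]##*/}")                                      # keep just the device names
    cat "/sys/block/${paths[0]}/ana_state"                         # each path reports its own ANA state; data I/O uses /dev/nvme0n1
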
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/*
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]]
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}")
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 ))
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ !
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=70002 00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:36.167 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:36.167 [global] 00:08:36.167 thread=1 00:08:36.167 invalidate=1 00:08:36.167 rw=randrw 00:08:36.167 time_based=1 00:08:36.167 runtime=6 00:08:36.167 ioengine=libaio 00:08:36.167 direct=1 00:08:36.167 bs=4096 00:08:36.167 iodepth=128 00:08:36.167 norandommap=0 00:08:36.167 numjobs=1 00:08:36.167 00:08:36.167 verify_dump=1 00:08:36.167 verify_backlog=512 00:08:36.167 verify_state_save=0 00:08:36.167 do_verify=1 00:08:36.167 verify=crc32c-intel 00:08:36.167 [job0] 00:08:36.167 filename=/dev/nvme0n1 00:08:36.167 Could not set queue depth (nvme0n1) 00:08:36.424 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.425 fio-3.35 00:08:36.425 Starting 1 thread 00:08:37.356 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:37.356 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:08:37.614 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:08:38.575 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:08:38.575 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:08:38.575 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:08:38.575 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:08:38.832 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:08:39.398 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:08:40.331 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:08:40.331 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:08:40.331 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:08:40.331 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 70002
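
Note: the listener ANA states were just flipped a second time (10.0.0.2 to non_optimized, 10.0.0.3 to inaccessible) while the fio verify job from multipath.sh@87 kept running; the summary below shows I/O riding through both transitions. The check_ana_state helper being traced at multipath.sh@18-@26 amounts to this polling loop (reconstructed sketch from the xtrace, not the verbatim source):

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # wait until the kernel reports the expected ANA state for this path
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            sleep 1s
            (( timeout-- == 0 )) && return 1   # give up after ~20 checks
        done
    }
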
00:08:42.863
00:08:42.863 job0: (groupid=0, jobs=1): err= 0: pid=70023: Thu Jul 25 12:15:57 2024
00:08:42.863 read: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(261MiB/6005msec)
00:08:42.863 slat (usec): min=2, max=10211, avg=51.53, stdev=234.61
00:08:42.863 clat (usec): min=587, max=19730, avg=7867.40, stdev=1249.95
00:08:42.863 lat (usec): min=605, max=19749, avg=7918.93, stdev=1259.27
00:08:42.863 clat percentiles (usec):
00:08:42.863 | 1.00th=[ 4752], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7111],
00:08:42.863 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7963],
00:08:42.863 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9896],
00:08:42.863 | 99.00th=[11600], 99.50th=[12125], 99.90th=[17171], 99.95th=[17957],
00:08:42.863 | 99.99th=[19792]
00:08:42.863 bw ( KiB/s): min= 8224, max=30000, per=51.93%, avg=23116.45, stdev=5810.04, samples=11
00:08:42.863 iops : min= 2056, max= 7500, avg=5779.09, stdev=1452.51, samples=11
00:08:42.863 write: IOPS=6468, BW=25.3MiB/s (26.5MB/s)(137MiB/5418msec); 0 zone resets
00:08:42.863 slat (usec): min=3, max=5706, avg=62.75, stdev=163.94
00:08:42.863 clat (usec): min=324, max=13175, avg=6728.55, stdev=980.33
00:08:42.863 lat (usec): min=369, max=13199, avg=6791.30, stdev=983.42
00:08:42.863 clat percentiles (usec):
00:08:42.863 | 1.00th=[ 3818], 5.00th=[ 4883], 10.00th=[ 5735], 20.00th=[ 6194],
00:08:42.863 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6980],
00:08:42.863 | 70.00th=[ 7111], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 7898],
00:08:42.863 | 99.00th=[10028], 99.50th=[10552], 99.90th=[12125], 99.95th=[12387],
00:08:42.863 | 99.99th=[12911]
00:08:42.863 bw ( KiB/s): min= 8928, max=29536, per=89.42%, avg=23138.73, stdev=5408.29, samples=11
00:08:42.863 iops : min= 2232, max= 7384, avg=5784.64, stdev=1352.06, samples=11
00:08:42.863 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
00:08:42.863 lat (msec) : 2=0.01%, 4=0.68%, 10=95.86%, 20=3.45%
00:08:42.863 cpu : usr=5.76%, sys=22.76%, ctx=6443, majf=0, minf=96
00:08:42.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:08:42.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:42.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:42.863 issued rwts: total=66828,35049,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:42.863 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:42.863
00:08:42.863 Run status group 0 (all jobs):
00:08:42.863 READ: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=261MiB (274MB), run=6005-6005msec
00:08:42.863 WRITE: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=137MiB (144MB), run=5418-5418msec
00:08:42.863
00:08:42.863 Disk stats (read/write):
00:08:42.863 nvme0n1: ios=65872/34388, merge=0/0, ticks=484387/216158, in_queue=700545, util=98.48%
00:08:42.863 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:08:42.863 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:08:43.122 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:44.056 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:44.056 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:44.057 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:44.057 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:44.057 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70149 00:08:44.057 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:44.057 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:44.057 [global] 00:08:44.057 thread=1 00:08:44.057 invalidate=1 00:08:44.057 rw=randrw 00:08:44.057 time_based=1 00:08:44.057 runtime=6 00:08:44.057 ioengine=libaio 00:08:44.057 direct=1 00:08:44.057 bs=4096 00:08:44.057 iodepth=128 00:08:44.057 norandommap=0 00:08:44.057 numjobs=1 00:08:44.057 00:08:44.057 verify_dump=1 00:08:44.057 verify_backlog=512 00:08:44.057 verify_state_save=0 00:08:44.057 do_verify=1 00:08:44.057 verify=crc32c-intel 00:08:44.057 [job0] 00:08:44.057 filename=/dev/nvme0n1 00:08:44.057 Could not set queue depth (nvme0n1) 00:08:44.057 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:44.057 fio-3.35 00:08:44.057 Starting 1 thread 00:08:44.991 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:45.258 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:45.536 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:45.536 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:45.536 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:45.536 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:45.536 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:45.536 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:45.536 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:45.537 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:45.537 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:45.537 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:45.537 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:45.537 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:45.537 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:46.911 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:46.911 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:46.911 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:46.911 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:46.911 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:47.169 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:48.102 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:48.102 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:48.102 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:48.102 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70149 00:08:50.631 00:08:50.631 job0: (groupid=0, jobs=1): err= 0: pid=70176: Thu Jul 25 12:16:05 2024 00:08:50.631 read: IOPS=12.4k, BW=48.5MiB/s (50.8MB/s)(291MiB/6002msec) 00:08:50.631 slat (usec): min=2, max=4915, avg=42.16, stdev=205.04 00:08:50.631 clat (usec): min=325, max=13678, avg=7198.77, stdev=1517.55 00:08:50.631 lat (usec): min=346, max=13687, avg=7240.93, stdev=1535.64 00:08:50.631 clat percentiles (usec): 00:08:50.631 | 1.00th=[ 3261], 5.00th=[ 4424], 10.00th=[ 5080], 20.00th=[ 5997], 00:08:50.631 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7504], 00:08:50.631 | 70.00th=[ 7767], 80.00th=[ 8291], 90.00th=[ 8848], 95.00th=[ 9372], 00:08:50.631 | 99.00th=[11207], 99.50th=[11600], 99.90th=[12387], 99.95th=[12911], 00:08:50.631 | 99.99th=[13304] 00:08:50.631 bw ( KiB/s): min= 4888, max=48728, per=52.78%, avg=26205.18, stdev=12682.78, samples=11 00:08:50.631 iops : min= 1222, max=12182, avg=6551.27, stdev=3170.66, samples=11 00:08:50.631 write: IOPS=7606, BW=29.7MiB/s (31.2MB/s)(152MiB/5114msec); 0 zone resets 00:08:50.631 slat (usec): min=3, max=2789, avg=49.79, stdev=123.32 00:08:50.631 clat (usec): min=393, max=12908, avg=5792.37, stdev=1566.84 00:08:50.631 lat (usec): min=453, max=12935, avg=5842.16, stdev=1581.29 00:08:50.631 clat percentiles (usec): 00:08:50.631 | 1.00th=[ 2507], 5.00th=[ 3163], 10.00th=[ 3556], 20.00th=[ 4178], 00:08:50.631 | 30.00th=[ 4752], 40.00th=[ 5669], 50.00th=[ 6259], 60.00th=[ 6587], 00:08:50.631 | 70.00th=[ 6849], 80.00th=[ 7111], 90.00th=[ 7439], 95.00th=[ 7701], 00:08:50.631 | 99.00th=[ 9241], 99.50th=[10028], 99.90th=[11600], 99.95th=[11863], 00:08:50.631 | 99.99th=[12387] 00:08:50.631 bw ( KiB/s): min= 5296, max=48200, per=86.14%, avg=26210.91, stdev=12523.58, samples=11 00:08:50.631 iops : min= 1324, max=12050, avg=6552.73, stdev=3130.89, samples=11 00:08:50.631 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.03% 00:08:50.631 lat (msec) : 2=0.25%, 4=7.18%, 10=90.32%, 20=2.18% 00:08:50.631 cpu : usr=6.53%, sys=25.56%, ctx=7524, majf=0, minf=177 00:08:50.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:50.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.631 issued rwts: total=74501,38901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.631 00:08:50.631 Run status group 0 (all jobs): 00:08:50.631 READ: bw=48.5MiB/s (50.8MB/s), 48.5MiB/s-48.5MiB/s (50.8MB/s-50.8MB/s), io=291MiB (305MB), run=6002-6002msec 00:08:50.631 WRITE: bw=29.7MiB/s (31.2MB/s), 29.7MiB/s-29.7MiB/s (31.2MB/s-31.2MB/s), io=152MiB (159MB), run=5114-5114msec 00:08:50.631 00:08:50.631 Disk stats (read/write): 00:08:50.631 nvme0n1: ios=73444/38420, merge=0/0, ticks=492428/204421, in_queue=696849, util=98.63% 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.631 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.631 rmmod nvme_tcp 00:08:50.631 rmmod nvme_fabrics 00:08:50.889 rmmod nvme_keyring 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 69859 ']' 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 69859 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 69859 ']' 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 69859 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69859 00:08:50.889 killing process with pid 69859 00:08:50.889 12:16:05 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69859' 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 69859 00:08:50.889 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 69859 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:51.147 ************************************ 00:08:51.147 END TEST nvmf_target_multipath 00:08:51.147 ************************************ 00:08:51.147 00:08:51.147 real 0m20.515s 00:08:51.147 user 1m20.640s 00:08:51.147 sys 0m6.501s 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.147 12:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.148 ************************************ 00:08:51.148 START TEST nvmf_zcopy 00:08:51.148 ************************************ 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.148 * Looking for test storage... 
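The multipath test that just wrapped up leans on a small polling helper, check_ana_state, whose shape can be read off the multipath.sh@18-26 xtrace lines above. A minimal sketch under that reading (the shipped script may differ in detail):

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state

    # Poll until the sysfs node exists and reports the expected ANA state;
    # each miss sleeps 1s and burns one unit of the 20-attempt budget.
    while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        ((timeout-- == 0)) && return 1
        sleep 1s
    done
}

The transitions it waits on are driven by RPCs of the form traced above, e.g.:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized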
00:08:51.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.148 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.148 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.406 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:51.406 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:51.406 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:51.406 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:51.406 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:51.406 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:51.406 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.406 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.406 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:51.407 Cannot find device "nvmf_tgt_br" 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.407 Cannot find device "nvmf_tgt_br2" 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:51.407 Cannot find device "nvmf_tgt_br" 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 
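The "Cannot find device" messages above and the "Cannot open network namespace" errors just below are the harmless teardown of a previous run's leftovers; nvmf_veth_init then rebuilds the topology from scratch. Condensed from the nvmf/common.sh@166-202 commands traced below (link-up steps and error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge          # ties the three host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify the bridge before any NVMe-oF traffic is attempted.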
00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:51.407 Cannot find device "nvmf_tgt_br2" 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:51.407 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:51.407 12:16:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:51.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:08:51.665 00:08:51.665 --- 10.0.0.2 ping statistics --- 00:08:51.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.665 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:51.665 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.665 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:51.665 00:08:51.665 --- 10.0.0.3 ping statistics --- 00:08:51.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.665 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:51.665 00:08:51.665 --- 10.0.0.1 ping statistics --- 00:08:51.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.665 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=70457 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 70457 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 70457 ']' 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.665 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.665 [2024-07-25 12:16:06.405778] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:08:51.665 [2024-07-25 12:16:06.405881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.923 [2024-07-25 12:16:06.544849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.923 [2024-07-25 12:16:06.673839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.923 [2024-07-25 12:16:06.673898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.923 [2024-07-25 12:16:06.673912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.923 [2024-07-25 12:16:06.673922] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.923 [2024-07-25 12:16:06.673931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
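nvmfappstart boils down to launching the target inside that namespace, so it only sees the veth endpoints, then blocking in waitforlisten until pid 70457 is alive and /var/tmp/spdk.sock answers RPCs (hence the "Waiting for process to start up..." message above). Flag meanings, each confirmed by a notice in the trace:

# -i 0      shared-memory ID (NVMF_APP_SHM_ID)
# -e 0xFFFF tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified")
# -m 0x2    core mask, i.e. run the reactor on core 1 (see the reactor notice below)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2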
00:08:51.923 [2024-07-25 12:16:06.673963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.887 [2024-07-25 12:16:07.432428] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.887 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.888 [2024-07-25 12:16:07.448503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.888 malloc0 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:52.888 { 00:08:52.888 "params": { 00:08:52.888 "name": "Nvme$subsystem", 00:08:52.888 "trtype": "$TEST_TRANSPORT", 00:08:52.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.888 "adrfam": "ipv4", 00:08:52.888 "trsvcid": "$NVMF_PORT", 00:08:52.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.888 "hdgst": ${hdgst:-false}, 00:08:52.888 "ddgst": ${ddgst:-false} 00:08:52.888 }, 00:08:52.888 "method": "bdev_nvme_attach_controller" 00:08:52.888 } 00:08:52.888 EOF 00:08:52.888 )") 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:52.888 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:52.888 "params": { 00:08:52.888 "name": "Nvme1", 00:08:52.888 "trtype": "tcp", 00:08:52.888 "traddr": "10.0.0.2", 00:08:52.888 "adrfam": "ipv4", 00:08:52.888 "trsvcid": "4420", 00:08:52.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.888 "hdgst": false, 00:08:52.888 "ddgst": false 00:08:52.888 }, 00:08:52.888 "method": "bdev_nvme_attach_controller" 00:08:52.888 }' 00:08:52.888 [2024-07-25 12:16:07.542295] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:08:52.888 [2024-07-25 12:16:07.542389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70508 ] 00:08:52.888 [2024-07-25 12:16:07.682806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.146 [2024-07-25 12:16:07.818615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.406 Running I/O for 10 seconds... 
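The bdevperf process above reads its bdev configuration from /dev/fd/62, generated on the fly by gen_nvmf_target_json. The params block printf'd in the trace is verbatim; the surrounding "subsystems"/"bdev" wrapper shown here is an assumption based on the standard SPDK JSON-config layout:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

With that config, bdevperf attaches to the listener at 10.0.0.2:4420 and runs the 10-second verify workload whose results follow.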
00:09:03.408 00:09:03.408 Latency(us) 00:09:03.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.408 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:03.408 Verification LBA range: start 0x0 length 0x1000 00:09:03.408 Nvme1n1 : 10.01 6301.00 49.23 0.00 0.00 20248.34 2725.70 32887.16 00:09:03.408 =================================================================================================================== 00:09:03.409 Total : 6301.00 49.23 0.00 0.00 20248.34 2725.70 32887.16 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70630 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.668 { 00:09:03.668 "params": { 00:09:03.668 "name": "Nvme$subsystem", 00:09:03.668 "trtype": "$TEST_TRANSPORT", 00:09:03.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.668 "adrfam": "ipv4", 00:09:03.668 "trsvcid": "$NVMF_PORT", 00:09:03.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.668 "hdgst": ${hdgst:-false}, 00:09:03.668 "ddgst": ${ddgst:-false} 00:09:03.668 }, 00:09:03.668 "method": "bdev_nvme_attach_controller" 00:09:03.668 } 00:09:03.668 EOF 00:09:03.668 )") 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:03.668 [2024-07-25 12:16:18.285287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.668 [2024-07-25 12:16:18.285341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
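The "Requested NSID 1 already in use" error just above is the first of many repeated through the end of this excerpt while the 5-second randrw bdevperf run (perfpid=70630) is in flight: each iteration asks the target to add a namespace whose NSID is already attached, and the RPC is rejected with JSON-RPC error -32602 (Invalid parameters). Judging from the parameter map in the log, each attempt is equivalent to the call below; the point appears to be exercising the RPC/namespace-pause path under live I/O rather than actually adding anything:

# Expected to fail while NSID 1 is still attached to cnode1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
    nqn.2016-06.io.spdk:cnode1 malloc0 -n 1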
00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:03.668 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.668 "params": { 00:09:03.668 "name": "Nvme1", 00:09:03.668 "trtype": "tcp", 00:09:03.668 "traddr": "10.0.0.2", 00:09:03.668 "adrfam": "ipv4", 00:09:03.668 "trsvcid": "4420", 00:09:03.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.668 "hdgst": false, 00:09:03.668 "ddgst": false 00:09:03.668 }, 00:09:03.668 "method": "bdev_nvme_attach_controller" 00:09:03.668 }' 00:09:03.668 2024/07/25 12:16:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:03.668 [2024-07-25 12:16:18.297265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.668 [2024-07-25 12:16:18.297301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.668 2024/07/25 12:16:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:03.668 [2024-07-25 12:16:18.309259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.668 [2024-07-25 12:16:18.309285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.668 2024/07/25 12:16:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:03.668 [2024-07-25 12:16:18.320395] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:09:03.668 [2024-07-25 12:16:18.320478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70630 ]
00:09:03.668 [2024-07-25 12:16:18.321257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:03.668 [2024-07-25 12:16:18.321280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.668 2024/07/25 12:16:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line add_ns failure repeats at ~12 ms intervals from 12:16:18.333 through 12:16:18.753; only the unique records from that window are shown below ...]
00:09:03.668 [2024-07-25 12:16:18.453584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:03.926 [2024-07-25 12:16:18.563872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:03.927 Running I/O for 5 seconds...
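For readers tracing this failure: the params map printed by the Go JSON-RPC client above corresponds to a JSON-RPC 2.0 request like the sketch below. The request body is reconstructed from the logged fields rather than captured from this run, and it assumes SPDK's default RPC socket at /var/tmp/spdk.sock plus a netcat built with UNIX-socket support (-U). Once NSID 1 is already claimed on the subsystem, this request draws exactly the Code=-32602 (Invalid parameters) response that fills this log.

# Reconstructed request for the failing call. The JSON fields come from the
# logged params map; the socket path and tooling are assumptions, not taken
# from this log. Sending it against a subsystem that already owns NSID 1
# returns the -32602 error shown above.
cat <<'EOF' | nc -U /var/tmp/spdk.sock
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "nvmf_subsystem_add_ns",
  "params": {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {
      "bdev_name": "malloc0",
      "nsid": 1,
      "no_auto_visible": false
    }
  }
}
EOF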
00:09:03.927 [2024-07-25 12:16:18.771106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:03.927 [2024-07-25 12:16:18.771154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.927 2024/07/25 12:16:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the identical failure triplet continues at 10-18 ms intervals from 12:16:18.785 through 12:16:19.549 while the 5-second I/O run proceeds ...]
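The same operation is normally driven through SPDK's scripts/rpc.py wrapper. Below is a minimal repro sketch, assuming the nvmf_subsystem_add_ns subcommand at this revision takes the subsystem NQN and bdev name positionally and the namespace ID via -n/--nsid; repeating the command against a subsystem that already owns NSID 1 reproduces the Code=-32602 responses seen throughout this run.

# Minimal repro sketch (flag names assumed; verify with
# scripts/rpc.py nvmf_subsystem_add_ns -h for this SPDK revision).
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0  # first add succeeds
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0  # fails: NSID 1 already in use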
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.704 [2024-07-25 12:16:19.558892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.704 [2024-07-25 12:16:19.558943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.704 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.573650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.573687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.583928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.583978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.598851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.598904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.609201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.609250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.623400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.623450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.638327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.638359] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.655687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.655735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.670164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.670221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.685213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.685251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.701361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.701430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.717840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.717892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.735060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.735112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.750752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.962 [2024-07-25 12:16:19.750804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.962 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.962 [2024-07-25 12:16:19.767101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.963 [2024-07-25 12:16:19.767151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.963 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.963 [2024-07-25 12:16:19.782861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.963 [2024-07-25 12:16:19.782913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.963 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.963 [2024-07-25 12:16:19.792758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.963 [2024-07-25 12:16:19.792808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.963 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.963 [2024-07-25 12:16:19.807391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.963 [2024-07-25 12:16:19.807442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.963 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:04.963 [2024-07-25 12:16:19.818868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.963 [2024-07-25 12:16:19.818919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.963 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:05.221 [2024-07-25 12:16:19.835870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.221 [2024-07-25 12:16:19.835923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace
00:09:05.221 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:05.221 [2024-07-25 12:16:19.851303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.221 [2024-07-25 12:16:19.851363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.221 2024/07/25 12:16:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line cycle (target: "Requested NSID 1 already in use", "Unable to add namespace"; client: Code=-32602 Msg=Invalid parameters) repeats roughly every 10-20 ms for each further attempt from 12:16:19.868 through 12:16:20.242 (elapsed 00:09:05.221-00:09:05.480), always with an identical params map ...]
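For readability: the params map printed in the client lines above corresponds to a JSON-RPC 2.0 request of the shape sketched below. This is a minimal Go reconstruction for illustration, not SPDK's actual client source; the struct names and the request id are invented, while the method name, field names, and values (nqn.2016-06.io.spdk:cnode1, malloc0, nsid 1, no_auto_visible false) are taken verbatim from the log.

package main

import (
	"encoding/json"
	"fmt"
)

// Field names mirror the logged params map; struct names are illustrative.
type namespaceParams struct {
	BdevName      string `json:"bdev_name"`
	Nsid          int    `json:"nsid"`
	NoAutoVisible bool   `json:"no_auto_visible"`
}

type addNsParams struct {
	Nqn       string          `json:"nqn"`
	Namespace namespaceParams `json:"namespace"`
}

type rpcRequest struct {
	Version string      `json:"jsonrpc"`
	Method  string      `json:"method"`
	ID      int         `json:"id"` // request id is arbitrary here (assumed)
	Params  interface{} `json:"params"`
}

func main() {
	req := rpcRequest{
		Version: "2.0",
		Method:  "nvmf_subsystem_add_ns",
		ID:      1,
		Params: addNsParams{
			Nqn: "nqn.2016-06.io.spdk:cnode1",
			Namespace: namespaceParams{
				BdevName:      "malloc0",
				Nsid:          1, // NSID 1 already exists on cnode1, hence the rejection
				NoAutoVisible: false,
			},
		},
	}
	body, err := json.Marshal(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}

Because NSID 1 was already assigned on cnode1 earlier in the run, every repetition of this request draws the same rejection recorded above.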
[... identical failed attempts continue from 12:16:20.259 through 12:16:20.731 (elapsed 00:09:05.480-00:09:05.998); every call asks for NSID 1 on nqn.2016-06.io.spdk:cnode1 and is rejected the same way ...]
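A note on the garbled-looking token no_auto_visible:%!s(bool=false) in each params dump: it is not part of the request. It is Go's fmt package flagging a %s verb applied to a bool value while the client stringifies its params map for the log line. The two-line sketch below (illustrative, not the client's actual logging code) reproduces the token.

package main

import "fmt"

func main() {
	// fmt marks a %s verb applied to a non-string operand with %!s(...),
	// which is exactly the token appearing in the params dumps above.
	fmt.Printf("no_auto_visible:%s\n", false) // prints: no_auto_visible:%!s(bool=false)
}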
[... the loop carries on unchanged from 12:16:20.747 through 12:16:21.251 (elapsed 00:09:05.998-00:09:06.552): NSID 1 is still in use, so every nvmf_subsystem_add_ns call fails with Code=-32602 Msg=Invalid parameters ...]
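Code=-32602 in every client line is the reserved JSON-RPC 2.0 error code for invalid params, so the response envelope behind these lines presumably looks like the literal in the sketch below. Struct names are hypothetical; only the code and the message text are taken from the log.

package main

import (
	"encoding/json"
	"fmt"
)

// Error object shape per the JSON-RPC 2.0 spec; -32602 is "Invalid params".
type rpcError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

type rpcResponse struct {
	Version string          `json:"jsonrpc"`
	ID      int             `json:"id"`
	Result  json.RawMessage `json:"result,omitempty"`
	Error   *rpcError       `json:"error,omitempty"`
}

func main() {
	// Response body equivalent to the failures logged above (illustrative).
	raw := `{"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}`
	var resp rpcResponse
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		panic(err)
	}
	if resp.Error != nil {
		// Mirrors the client-side log format: err: Code=-32602 Msg=Invalid parameters
		fmt.Printf("Code=%d Msg=%s\n", resp.Error.Code, resp.Error.Message)
	}
}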
[... further identical attempts run from 12:16:21.251 through 12:16:21.966 (elapsed 00:09:06.552-00:09:07.330); the last complete cycle in this stretch is ...]
00:09:07.330 [2024-07-25 12:16:21.966526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:07.330 [2024-07-25 12:16:21.966589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:07.330 2024/07/25 12:16:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:21.983711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:21.983760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:21.999348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:21.999409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.008562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.008612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.024043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.024084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.039700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.039734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.049908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.049945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.063549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.063601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.078712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.078750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.089177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.089217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.103895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.103934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.114483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.114532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.128727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.128764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.144319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.144359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:07.330 [2024-07-25 12:16:22.154780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.154832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.169243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.169304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.179727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.179768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.330 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.330 [2024-07-25 12:16:22.194114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.330 [2024-07-25 12:16:22.194156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.331 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.589 [2024-07-25 12:16:22.206170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.589 [2024-07-25 12:16:22.206207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.589 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.589 [2024-07-25 12:16:22.224014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.589 [2024-07-25 12:16:22.224057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.589 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.589 [2024-07-25 12:16:22.238916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.238960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.249560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.249600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.264331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.264369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.275202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.275247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.289640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.289685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.301559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.301599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.318235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.318277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.335748] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.335791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.352275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.352322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.368651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.368734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.387755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.387809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.402251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.402305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.418738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.418777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.434253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.434306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.590 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.590 [2024-07-25 12:16:22.451845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.590 [2024-07-25 12:16:22.451897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.467428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.467465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.484990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.485027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.500219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.500260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.510765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.510802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.525012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.525050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.535166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.535210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.549768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.549805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.560189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.560251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.574467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.574502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.589776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.589828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.600026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.600075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.615025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.615064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.631069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.631124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.646778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.646839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.664529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.664587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.678849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.678915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.695730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.850 [2024-07-25 12:16:22.695800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.850 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:07.850 [2024-07-25 12:16:22.712590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.851 [2024-07-25 12:16:22.712660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.109 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.109 [2024-07-25 12:16:22.729571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:08.110 [2024-07-25 12:16:22.729637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.744983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.745031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.760748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.760808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.778654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.778724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.793421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.793485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.809874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.809942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.826563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.826627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.842178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.842247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.857775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.857836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.870364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.870433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.888016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.888082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.902600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.902671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.919017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.919085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.934919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.934982] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.951386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.951456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.110 [2024-07-25 12:16:22.961845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.110 [2024-07-25 12:16:22.961903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.110 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.369 [2024-07-25 12:16:22.977034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.369 [2024-07-25 12:16:22.977089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.369 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.369 [2024-07-25 12:16:22.993799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.369 [2024-07-25 12:16:22.993861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.369 2024/07/25 12:16:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.369 [2024-07-25 12:16:23.008621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.369 [2024-07-25 12:16:23.008677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.022589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.022647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.038898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.038968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.055236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.055294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.070474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.070531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.086519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.086577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.103493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.103551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.118332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.118406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.134683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.134744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.151487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.151540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.168406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.168454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.184101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.184149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.201662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.201714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.217981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.218039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.370 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:08.370 [2024-07-25 12:16:23.232762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.370 [2024-07-25 12:16:23.232809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.639 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters
00:09:08.639 [2024-07-25 12:16:23.248342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.639 [2024-07-25 12:16:23.248407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.639 2024/07/25 12:16:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the three records above repeat verbatim 35 more times, only the timestamps advancing from 12:16:23.258 through 12:16:23.764, as the test keeps requesting NSID 1 while it is already in use]
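The storm of identical failures above is the negative half of the zcopy test: it loops nvmf_subsystem_add_ns against NSID 1, which the subsystem already holds, and expects Code=-32602 back on every call. One iteration could be reproduced by hand with SPDK's bundled JSON-RPC client, roughly as follows (a sketch, assuming a running target that already has subsystem nqn.2016-06.io.spdk:cnode1 and bdev malloc0 configured, as earlier in this job):

    # first call claims NSID 1 for malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # repeating it with the same explicit NSID is rejected: the target logs
    # "Requested NSID 1 already in use" and the RPC returns Code=-32602 (Invalid parameters)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1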
00:09:08.903
00:09:08.903 Latency(us)
00:09:08.903 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:08.903 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:08.903 	 Nvme1n1                                                               :       5.01   11692.22      91.35       0.00       0.00   10932.96    4557.73   22282.24
00:09:08.903 ===================================================================================================================
00:09:08.903 Total                                                                    :            11692.22      91.35       0.00       0.00   10932.96    4557.73   22282.24
[after the summary prints, the same three-record error sequence resumes at 12:16:23.774 and repeats 26 more times at roughly 12 ms intervals through 12:16:24.074; by then the backgrounded I/O job (pid 70630) has already exited, so the kill at zcopy.sh line 42 has nothing to signal]
00:09:09.421 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70630) - No such process
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70630
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.421 delay0
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
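The bdev_delay_create call above stacks a delay bdev named delay0 on top of malloc0 before re-adding the namespace. Going by the delay bdev's documented parameters, the four numeric flags should map as annotated below; the flag meanings are an inference, not something this log states:

    # -b malloc0   : base bdev to wrap
    # -d delay0    : name of the new delay bdev
    # -r 1000000   : average read latency to inject, in microseconds
    # -t 1000000   : p99 (tail) read latency, in microseconds
    # -w 1000000   : average write latency, in microseconds
    # -n 1000000   : p99 (tail) write latency, in microseconds
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

Presumably the point of the roughly one-second injected latency is to keep commands in flight long enough for the abort example run that follows to have outstanding I/O to cancel.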
00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.421 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:09.421 [2024-07-25 12:16:24.268017] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:15.973 Initializing NVMe Controllers 00:09:15.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:15.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:15.973 Initialization complete. Launching workers. 00:09:15.973 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 227 00:09:15.973 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 514, failed to submit 33 00:09:15.973 success 330, unsuccess 184, failed 0 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:15.973 rmmod nvme_tcp 00:09:15.973 rmmod nvme_fabrics 00:09:15.973 rmmod nvme_keyring 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 70457 ']' 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 70457 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 70457 ']' 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 70457 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70457 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:15.973 killing process with pid 70457 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70457' 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 70457 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 70457 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:15.973 ************************************ 00:09:15.973 END TEST nvmf_zcopy 00:09:15.973 ************************************ 00:09:15.973 00:09:15.973 real 0m24.842s 00:09:15.973 user 0m40.758s 00:09:15.973 sys 0m6.489s 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.973 ************************************ 00:09:15.973 START TEST nvmf_nmic 00:09:15.973 ************************************ 00:09:15.973 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:16.231 * Looking for test storage... 
00:09:16.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.231 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.232 12:16:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:16.232 Cannot find device "nvmf_tgt_br" 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.232 Cannot find device "nvmf_tgt_br2" 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:09:16.232 Cannot find device "nvmf_tgt_br" 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:16.232 Cannot find device "nvmf_tgt_br2" 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:16.232 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:16.232 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:16.490 12:16:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:16.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:16.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:09:16.490 00:09:16.490 --- 10.0.0.2 ping statistics --- 00:09:16.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.490 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:16.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:16.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:09:16.490 00:09:16.490 --- 10.0.0.3 ping statistics --- 00:09:16.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.490 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:16.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:16.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:09:16.490 00:09:16.490 --- 10.0.0.1 ping statistics --- 00:09:16.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.490 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:16.490 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=70953 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 70953 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 70953 ']' 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.491 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.491 [2024-07-25 12:16:31.314779] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:09:16.491 [2024-07-25 12:16:31.315167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.748 [2024-07-25 12:16:31.458641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.748 [2024-07-25 12:16:31.592809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.748 [2024-07-25 12:16:31.593169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.748 [2024-07-25 12:16:31.593396] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.748 [2024-07-25 12:16:31.593417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.748 [2024-07-25 12:16:31.593427] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.748 [2024-07-25 12:16:31.593516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.748 [2024-07-25 12:16:31.593789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.748 [2024-07-25 12:16:31.594094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.748 [2024-07-25 12:16:31.594102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 [2024-07-25 12:16:32.334855] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 Malloc0 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.681 12:16:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 [2024-07-25 12:16:32.407754] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.681 test case1: single bdev can't be used in multiple subsystems 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.681 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 [2024-07-25 12:16:32.435617] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:17.682 [2024-07-25 12:16:32.435674] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:17.682 [2024-07-25 12:16:32.435704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.682 2024/07/25 12:16:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.682 request: 00:09:17.682 { 00:09:17.682 "method": "nvmf_subsystem_add_ns", 00:09:17.682 "params": { 00:09:17.682 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:17.682 "namespace": { 00:09:17.682 "bdev_name": "Malloc0", 00:09:17.682 "no_auto_visible": false 00:09:17.682 } 00:09:17.682 } 00:09:17.682 } 00:09:17.682 Got JSON-RPC error response 00:09:17.682 GoRPCClient: error on JSON-RPC call 00:09:17.682 Adding namespace failed - expected result. 00:09:17.682 test case2: host connect to nvmf target in multiple paths 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.682 [2024-07-25 12:16:32.447756] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.682 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.939 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:17.939 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.939 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:17.939 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.939 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:17.939 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.467 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.467 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.467 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.467 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.467 12:16:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.467 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:20.467 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:20.467 [global] 00:09:20.467 thread=1 00:09:20.467 invalidate=1 00:09:20.467 rw=write 00:09:20.467 time_based=1 00:09:20.467 runtime=1 00:09:20.467 ioengine=libaio 00:09:20.467 direct=1 00:09:20.467 bs=4096 00:09:20.467 iodepth=1 00:09:20.467 norandommap=0 00:09:20.467 numjobs=1 00:09:20.467 00:09:20.467 verify_dump=1 00:09:20.467 verify_backlog=512 00:09:20.467 verify_state_save=0 00:09:20.467 do_verify=1 00:09:20.467 verify=crc32c-intel 00:09:20.467 [job0] 00:09:20.467 filename=/dev/nvme0n1 00:09:20.467 Could not set queue depth (nvme0n1) 00:09:20.467 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.467 fio-3.35 00:09:20.467 Starting 1 thread 00:09:21.401 00:09:21.401 job0: (groupid=0, jobs=1): err= 0: pid=71063: Thu Jul 25 12:16:36 2024 00:09:21.401 read: IOPS=3123, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:09:21.401 slat (nsec): min=14788, max=63804, avg=19188.62, stdev=5796.17 00:09:21.401 clat (usec): min=123, max=2382, avg=145.88, stdev=44.95 00:09:21.401 lat (usec): min=138, max=2410, avg=165.07, stdev=45.83 00:09:21.401 clat percentiles (usec): 00:09:21.401 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:09:21.401 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:09:21.401 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 167], 00:09:21.401 | 99.00th=[ 184], 99.50th=[ 206], 99.90th=[ 416], 99.95th=[ 783], 00:09:21.401 | 99.99th=[ 2376] 00:09:21.401 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:21.401 slat (usec): min=19, max=209, avg=26.69, stdev= 9.29 00:09:21.401 clat (usec): min=84, max=329, avg=104.34, stdev=12.90 00:09:21.401 lat (usec): min=104, max=452, avg=131.03, stdev=17.82 00:09:21.401 clat percentiles (usec): 00:09:21.401 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 97], 00:09:21.401 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 103], 00:09:21.401 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 117], 95.00th=[ 125], 00:09:21.401 | 99.00th=[ 145], 99.50th=[ 155], 99.90th=[ 258], 99.95th=[ 314], 00:09:21.401 | 99.99th=[ 330] 00:09:21.401 bw ( KiB/s): min=15440, max=15440, per=100.00%, avg=15440.00, stdev= 0.00, samples=1 00:09:21.401 iops : min= 3860, max= 3860, avg=3860.00, stdev= 0.00, samples=1 00:09:21.401 lat (usec) : 100=22.49%, 250=77.29%, 500=0.18%, 750=0.01%, 1000=0.01% 00:09:21.401 lat (msec) : 4=0.01% 00:09:21.401 cpu : usr=2.60%, sys=11.90%, ctx=6711, majf=0, minf=2 00:09:21.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.401 issued rwts: total=3127,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.401 00:09:21.401 Run status group 0 (all jobs): 00:09:21.401 READ: bw=12.2MiB/s (12.8MB/s), 12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=12.2MiB (12.8MB), run=1001-1001msec 00:09:21.401 WRITE: bw=14.0MiB/s (14.7MB/s), 
14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:09:21.401 00:09:21.401 Disk stats (read/write): 00:09:21.401 nvme0n1: ios=2958/3072, merge=0/0, ticks=464/377, in_queue=841, util=91.18% 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:21.401 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:21.401 rmmod nvme_tcp 00:09:21.401 rmmod nvme_fabrics 00:09:21.659 rmmod nvme_keyring 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 70953 ']' 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 70953 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 70953 ']' 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 70953 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70953 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.659 killing process with pid 70953 00:09:21.659 12:16:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70953' 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 70953 00:09:21.659 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 70953 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:21.918 00:09:21.918 real 0m5.810s 00:09:21.918 user 0m19.425s 00:09:21.918 sys 0m1.424s 00:09:21.918 ************************************ 00:09:21.918 END TEST nvmf_nmic 00:09:21.918 ************************************ 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.918 ************************************ 00:09:21.918 START TEST nvmf_fio_target 00:09:21.918 ************************************ 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:21.918 * Looking for test storage... 
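Before fio.sh gets under way, note how nvmf_nmic exited: the EXIT trap registered at startup funnels every path through nvmftestfini, which the log shows disconnecting the host, unloading the NVMe kernel modules, killing the target process, and flushing the initiator interface. A condensed sketch of that cleanup sequence — the body of _remove_spdk_ns is not visible in the log, so deleting the target namespace is an assumption here:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    set +e                              # tolerate modules that resist removal
    for i in {1..20}; do
        # Removing nvme-tcp also drags out nvme_fabrics and nvme_keyring,
        # matching the rmmod lines printed above.
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    set -e

    kill "$nvmfpid"                     # killprocess 70953 in the harness
    ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if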
00:09:21.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:21.918 
12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:21.918 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:21.919 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:22.177 Cannot find device "nvmf_tgt_br" 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.177 Cannot find device "nvmf_tgt_br2" 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:22.177 Cannot find device "nvmf_tgt_br" 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:22.177 Cannot find device "nvmf_tgt_br2" 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:22.177 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:22.177 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:22.177 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:22.177 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:22.177 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:22.177 
12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:22.177 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:22.177 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:22.177 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:22.177 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:22.177 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:22.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:09:22.436 00:09:22.436 --- 10.0.0.2 ping statistics --- 00:09:22.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.436 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:22.436 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:22.436 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:22.436 00:09:22.436 --- 10.0.0.3 ping statistics --- 00:09:22.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.436 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:22.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:09:22.436 00:09:22.436 --- 10.0.0.1 ping statistics --- 00:09:22.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.436 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=71243 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 71243 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 71243 ']' 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.436 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.436 [2024-07-25 12:16:37.172824] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
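The target starting here will listen on the veth topology that was assembled and ping-validated in the preceding lines. Condensed from the commands in the log into one recipe (the second target-side pair, nvmf_tgt_if2 at 10.0.0.3, and the loopback bring-up inside the netns are omitted here for brevity):

    # Namespace for the target, plus veth pairs bridging it to the initiator.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # 10.0.0.1 = initiator side, 10.0.0.2 = target side (inside the netns).
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # One bridge joins both sides; bring every link up.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Admit NVMe/TCP traffic on port 4420 and allow bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT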
00:09:22.436 [2024-07-25 12:16:37.172955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.694 [2024-07-25 12:16:37.307610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.694 [2024-07-25 12:16:37.410906] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.694 [2024-07-25 12:16:37.411099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.694 [2024-07-25 12:16:37.411376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.694 [2024-07-25 12:16:37.411520] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.694 [2024-07-25 12:16:37.411604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.694 [2024-07-25 12:16:37.411880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.694 [2024-07-25 12:16:37.412000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.694 [2024-07-25 12:16:37.412077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.694 [2024-07-25 12:16:37.412131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.652 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.652 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:23.652 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.652 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:23.652 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.652 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.652 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:23.910 [2024-07-25 12:16:38.522033] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.910 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.169 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:24.169 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.427 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:24.427 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.686 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:24.686 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.945 12:16:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:24.945 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:25.203 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.461 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:25.461 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.720 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:25.720 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.978 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:25.978 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:26.237 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:26.495 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:26.495 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.753 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:26.753 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:27.011 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.270 [2024-07-25 12:16:42.023333] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.270 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:27.528 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:27.786 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.044 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:28.044 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:28.044 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:09:28.044 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:28.044 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:28.044 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:29.957 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:29.957 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:29.957 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.957 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:29.957 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.957 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:29.957 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:29.957 [global] 00:09:29.957 thread=1 00:09:29.957 invalidate=1 00:09:29.957 rw=write 00:09:29.957 time_based=1 00:09:29.957 runtime=1 00:09:29.957 ioengine=libaio 00:09:29.957 direct=1 00:09:29.957 bs=4096 00:09:29.957 iodepth=1 00:09:29.957 norandommap=0 00:09:29.957 numjobs=1 00:09:29.957 00:09:29.957 verify_dump=1 00:09:29.957 verify_backlog=512 00:09:29.957 verify_state_save=0 00:09:29.957 do_verify=1 00:09:29.957 verify=crc32c-intel 00:09:29.957 [job0] 00:09:29.957 filename=/dev/nvme0n1 00:09:29.957 [job1] 00:09:29.957 filename=/dev/nvme0n2 00:09:29.957 [job2] 00:09:29.957 filename=/dev/nvme0n3 00:09:29.957 [job3] 00:09:29.957 filename=/dev/nvme0n4 00:09:30.215 Could not set queue depth (nvme0n1) 00:09:30.215 Could not set queue depth (nvme0n2) 00:09:30.215 Could not set queue depth (nvme0n3) 00:09:30.215 Could not set queue depth (nvme0n4) 00:09:30.215 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.215 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.215 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.215 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.215 fio-3.35 00:09:30.215 Starting 4 threads 00:09:31.607 00:09:31.607 job0: (groupid=0, jobs=1): err= 0: pid=71546: Thu Jul 25 12:16:46 2024 00:09:31.607 read: IOPS=1835, BW=7341KiB/s (7517kB/s)(7348KiB/1001msec) 00:09:31.607 slat (nsec): min=12279, max=99233, avg=15598.01, stdev=4256.47 00:09:31.607 clat (usec): min=91, max=2151, avg=266.26, stdev=48.68 00:09:31.607 lat (usec): min=185, max=2164, avg=281.86, stdev=48.51 00:09:31.607 clat percentiles (usec): 00:09:31.607 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 255], 00:09:31.607 | 30.00th=[ 260], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:09:31.607 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:09:31.607 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 766], 99.95th=[ 2147], 00:09:31.607 | 99.99th=[ 2147] 00:09:31.607 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:31.607 slat 
(nsec): min=18970, max=95646, avg=27037.70, stdev=6165.12 00:09:31.607 clat (usec): min=122, max=260, avg=204.27, stdev=12.53 00:09:31.607 lat (usec): min=149, max=355, avg=231.30, stdev=11.29 00:09:31.607 clat percentiles (usec): 00:09:31.607 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:09:31.607 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:09:31.607 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 227], 00:09:31.607 | 99.00th=[ 235], 99.50th=[ 239], 99.90th=[ 247], 99.95th=[ 249], 00:09:31.607 | 99.99th=[ 262] 00:09:31.607 bw ( KiB/s): min= 8192, max= 8192, per=21.01%, avg=8192.00, stdev= 0.00, samples=1 00:09:31.607 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:31.607 lat (usec) : 100=0.03%, 250=56.47%, 500=43.42%, 750=0.03%, 1000=0.03% 00:09:31.607 lat (msec) : 4=0.03% 00:09:31.607 cpu : usr=1.90%, sys=6.70%, ctx=3890, majf=0, minf=5 00:09:31.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.607 issued rwts: total=1837,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.607 job1: (groupid=0, jobs=1): err= 0: pid=71548: Thu Jul 25 12:16:46 2024 00:09:31.607 read: IOPS=1835, BW=7341KiB/s (7517kB/s)(7348KiB/1001msec) 00:09:31.607 slat (nsec): min=12095, max=37637, avg=17052.84, stdev=2907.40 00:09:31.607 clat (usec): min=146, max=2157, avg=264.89, stdev=48.88 00:09:31.607 lat (usec): min=164, max=2173, avg=281.94, stdev=48.92 00:09:31.607 clat percentiles (usec): 00:09:31.607 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 253], 00:09:31.607 | 30.00th=[ 258], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 265], 00:09:31.607 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 285], 00:09:31.607 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 766], 99.95th=[ 2147], 00:09:31.607 | 99.99th=[ 2147] 00:09:31.607 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:31.607 slat (nsec): min=16829, max=74577, avg=26858.56, stdev=5930.30 00:09:31.607 clat (usec): min=147, max=244, avg=204.49, stdev=11.90 00:09:31.607 lat (usec): min=198, max=268, avg=231.35, stdev=10.35 00:09:31.607 clat percentiles (usec): 00:09:31.607 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:09:31.607 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:09:31.607 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 225], 00:09:31.607 | 99.00th=[ 235], 99.50th=[ 237], 99.90th=[ 239], 99.95th=[ 239], 00:09:31.607 | 99.99th=[ 245] 00:09:31.607 bw ( KiB/s): min= 8192, max= 8192, per=21.01%, avg=8192.00, stdev= 0.00, samples=1 00:09:31.607 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:31.607 lat (usec) : 250=57.55%, 500=42.37%, 1000=0.05% 00:09:31.607 lat (msec) : 4=0.03% 00:09:31.607 cpu : usr=1.70%, sys=6.50%, ctx=3887, majf=0, minf=5 00:09:31.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.607 issued rwts: total=1837,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.607 job2: (groupid=0, jobs=1): 
err= 0: pid=71549: Thu Jul 25 12:16:46 2024 00:09:31.607 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:31.607 slat (nsec): min=15173, max=57254, avg=18370.54, stdev=3682.78 00:09:31.607 clat (usec): min=171, max=488, avg=193.42, stdev=12.67 00:09:31.607 lat (usec): min=187, max=504, avg=211.79, stdev=13.25 00:09:31.607 clat percentiles (usec): 00:09:31.607 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:09:31.607 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 194], 00:09:31.607 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 206], 95.00th=[ 212], 00:09:31.607 | 99.00th=[ 223], 99.50th=[ 225], 99.90th=[ 262], 99.95th=[ 465], 00:09:31.607 | 99.99th=[ 490] 00:09:31.607 write: IOPS=2585, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 00:09:31.607 slat (nsec): min=20718, max=72762, avg=25823.43, stdev=5571.11 00:09:31.607 clat (usec): min=123, max=247, avg=147.07, stdev= 9.49 00:09:31.607 lat (usec): min=145, max=320, avg=172.89, stdev=11.59 00:09:31.607 clat percentiles (usec): 00:09:31.607 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:09:31.607 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:09:31.607 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 163], 00:09:31.607 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 204], 00:09:31.607 | 99.99th=[ 247] 00:09:31.607 bw ( KiB/s): min=12288, max=12288, per=31.52%, avg=12288.00, stdev= 0.00, samples=1 00:09:31.607 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:31.607 lat (usec) : 250=99.92%, 500=0.08% 00:09:31.607 cpu : usr=2.20%, sys=8.50%, ctx=5148, majf=0, minf=12 00:09:31.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.607 issued rwts: total=2560,2588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.608 job3: (groupid=0, jobs=1): err= 0: pid=71550: Thu Jul 25 12:16:46 2024 00:09:31.608 read: IOPS=2608, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:09:31.608 slat (nsec): min=15108, max=61738, avg=18489.10, stdev=5356.04 00:09:31.608 clat (usec): min=149, max=430, avg=170.00, stdev=11.46 00:09:31.608 lat (usec): min=165, max=452, avg=188.49, stdev=13.46 00:09:31.608 clat percentiles (usec): 00:09:31.608 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 00:09:31.608 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:09:31.608 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 188], 00:09:31.608 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 235], 99.95th=[ 383], 00:09:31.608 | 99.99th=[ 433] 00:09:31.608 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:31.608 slat (usec): min=20, max=119, avg=27.93, stdev= 8.32 00:09:31.608 clat (usec): min=111, max=458, avg=133.60, stdev=15.33 00:09:31.608 lat (usec): min=134, max=496, avg=161.54, stdev=19.19 00:09:31.608 clat percentiles (usec): 00:09:31.608 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 125], 00:09:31.608 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:09:31.608 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 151], 00:09:31.608 | 99.00th=[ 167], 99.50th=[ 221], 99.90th=[ 314], 99.95th=[ 355], 00:09:31.608 | 99.99th=[ 457] 00:09:31.608 bw ( KiB/s): min=12288, max=12288, 
per=31.52%, avg=12288.00, stdev= 0.00, samples=1 00:09:31.608 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:31.608 lat (usec) : 250=99.81%, 500=0.19% 00:09:31.608 cpu : usr=3.60%, sys=8.90%, ctx=5683, majf=0, minf=13 00:09:31.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.608 issued rwts: total=2611,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.608 00:09:31.608 Run status group 0 (all jobs): 00:09:31.608 READ: bw=34.5MiB/s (36.2MB/s), 7341KiB/s-10.2MiB/s (7517kB/s-10.7MB/s), io=34.6MiB (36.2MB), run=1001-1001msec 00:09:31.608 WRITE: bw=38.1MiB/s (39.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=38.1MiB (40.0MB), run=1001-1001msec 00:09:31.608 00:09:31.608 Disk stats (read/write): 00:09:31.608 nvme0n1: ios=1586/1798, merge=0/0, ticks=429/388, in_queue=817, util=87.47% 00:09:31.608 nvme0n2: ios=1584/1798, merge=0/0, ticks=442/376, in_queue=818, util=88.65% 00:09:31.608 nvme0n3: ios=2048/2389, merge=0/0, ticks=406/375, in_queue=781, util=89.10% 00:09:31.608 nvme0n4: ios=2343/2560, merge=0/0, ticks=408/367, in_queue=775, util=89.75% 00:09:31.608 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:31.608 [global] 00:09:31.608 thread=1 00:09:31.608 invalidate=1 00:09:31.608 rw=randwrite 00:09:31.608 time_based=1 00:09:31.608 runtime=1 00:09:31.608 ioengine=libaio 00:09:31.608 direct=1 00:09:31.608 bs=4096 00:09:31.608 iodepth=1 00:09:31.608 norandommap=0 00:09:31.608 numjobs=1 00:09:31.608 00:09:31.608 verify_dump=1 00:09:31.608 verify_backlog=512 00:09:31.608 verify_state_save=0 00:09:31.608 do_verify=1 00:09:31.608 verify=crc32c-intel 00:09:31.608 [job0] 00:09:31.608 filename=/dev/nvme0n1 00:09:31.608 [job1] 00:09:31.608 filename=/dev/nvme0n2 00:09:31.608 [job2] 00:09:31.608 filename=/dev/nvme0n3 00:09:31.608 [job3] 00:09:31.608 filename=/dev/nvme0n4 00:09:31.608 Could not set queue depth (nvme0n1) 00:09:31.608 Could not set queue depth (nvme0n2) 00:09:31.608 Could not set queue depth (nvme0n3) 00:09:31.608 Could not set queue depth (nvme0n4) 00:09:31.608 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.608 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.608 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.608 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.608 fio-3.35 00:09:31.608 Starting 4 threads 00:09:32.982 00:09:32.982 job0: (groupid=0, jobs=1): err= 0: pid=71604: Thu Jul 25 12:16:47 2024 00:09:32.982 read: IOPS=2975, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:09:32.982 slat (nsec): min=13610, max=49491, avg=16871.46, stdev=3023.65 00:09:32.982 clat (usec): min=141, max=2084, avg=163.49, stdev=36.71 00:09:32.982 lat (usec): min=155, max=2100, avg=180.36, stdev=36.88 00:09:32.982 clat percentiles (usec): 00:09:32.982 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:09:32.982 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:09:32.982 | 70.00th=[ 167], 
80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 182], 00:09:32.982 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 245], 99.95th=[ 359], 00:09:32.982 | 99.99th=[ 2089] 00:09:32.982 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:32.982 slat (usec): min=18, max=118, avg=22.74, stdev= 4.27 00:09:32.982 clat (usec): min=99, max=735, avg=124.29, stdev=19.50 00:09:32.983 lat (usec): min=119, max=756, avg=147.03, stdev=20.55 00:09:32.983 clat percentiles (usec): 00:09:32.983 | 1.00th=[ 106], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 116], 00:09:32.983 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:09:32.983 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 141], 00:09:32.983 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 412], 99.95th=[ 586], 00:09:32.983 | 99.99th=[ 734] 00:09:32.983 bw ( KiB/s): min=12288, max=12288, per=31.23%, avg=12288.00, stdev= 0.00, samples=1 00:09:32.983 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:32.983 lat (usec) : 100=0.02%, 250=99.87%, 500=0.05%, 750=0.05% 00:09:32.983 lat (msec) : 4=0.02% 00:09:32.983 cpu : usr=2.60%, sys=8.80%, ctx=6050, majf=0, minf=12 00:09:32.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.983 issued rwts: total=2978,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.983 job1: (groupid=0, jobs=1): err= 0: pid=71605: Thu Jul 25 12:16:47 2024 00:09:32.983 read: IOPS=1611, BW=6446KiB/s (6600kB/s)(6452KiB/1001msec) 00:09:32.983 slat (nsec): min=13037, max=41552, avg=17110.77, stdev=2874.59 00:09:32.983 clat (usec): min=143, max=1300, avg=293.44, stdev=48.68 00:09:32.983 lat (usec): min=159, max=1323, avg=310.55, stdev=48.92 00:09:32.983 clat percentiles (usec): 00:09:32.983 | 1.00th=[ 180], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:09:32.983 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 285], 60.00th=[ 289], 00:09:32.983 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 379], 00:09:32.983 | 99.00th=[ 449], 99.50th=[ 545], 99.90th=[ 799], 99.95th=[ 1303], 00:09:32.983 | 99.99th=[ 1303] 00:09:32.983 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:32.983 slat (usec): min=20, max=117, avg=25.83, stdev= 4.74 00:09:32.983 clat (usec): min=101, max=487, avg=214.20, stdev=29.15 00:09:32.983 lat (usec): min=126, max=515, avg=240.02, stdev=29.06 00:09:32.983 clat percentiles (usec): 00:09:32.983 | 1.00th=[ 111], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 202], 00:09:32.983 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:09:32.983 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 253], 00:09:32.983 | 99.00th=[ 269], 99.50th=[ 310], 99.90th=[ 412], 99.95th=[ 474], 00:09:32.983 | 99.99th=[ 486] 00:09:32.983 bw ( KiB/s): min= 8192, max= 8192, per=20.82%, avg=8192.00, stdev= 0.00, samples=1 00:09:32.983 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:32.983 lat (usec) : 250=53.54%, 500=46.16%, 750=0.25%, 1000=0.03% 00:09:32.983 lat (msec) : 2=0.03% 00:09:32.983 cpu : usr=1.90%, sys=5.60%, ctx=3661, majf=0, minf=5 00:09:32.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.983 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.983 issued rwts: total=1613,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.983 job2: (groupid=0, jobs=1): err= 0: pid=71606: Thu Jul 25 12:16:47 2024 00:09:32.983 read: IOPS=1590, BW=6362KiB/s (6514kB/s)(6368KiB/1001msec) 00:09:32.983 slat (nsec): min=16156, max=56164, avg=21048.92, stdev=2689.86 00:09:32.983 clat (usec): min=178, max=803, avg=282.50, stdev=25.80 00:09:32.983 lat (usec): min=198, max=823, avg=303.55, stdev=25.80 00:09:32.983 clat percentiles (usec): 00:09:32.983 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 269], 00:09:32.983 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:09:32.983 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:09:32.983 | 99.00th=[ 347], 99.50th=[ 445], 99.90th=[ 545], 99.95th=[ 807], 00:09:32.983 | 99.99th=[ 807] 00:09:32.983 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:32.983 slat (usec): min=19, max=362, avg=27.64, stdev= 9.11 00:09:32.983 clat (usec): min=82, max=3302, avg=220.90, stdev=118.17 00:09:32.983 lat (usec): min=137, max=3345, avg=248.53, stdev=118.95 00:09:32.983 clat percentiles (usec): 00:09:32.983 | 1.00th=[ 129], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 202], 00:09:32.983 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:09:32.983 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 247], 00:09:32.983 | 99.00th=[ 392], 99.50th=[ 660], 99.90th=[ 2474], 99.95th=[ 2737], 00:09:32.983 | 99.99th=[ 3294] 00:09:32.983 bw ( KiB/s): min= 8192, max= 8192, per=20.82%, avg=8192.00, stdev= 0.00, samples=1 00:09:32.983 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:32.983 lat (usec) : 100=0.03%, 250=54.75%, 500=44.73%, 750=0.22%, 1000=0.14% 00:09:32.983 lat (msec) : 2=0.05%, 4=0.08% 00:09:32.983 cpu : usr=2.40%, sys=6.10%, ctx=3644, majf=0, minf=13 00:09:32.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.983 issued rwts: total=1592,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.983 job3: (groupid=0, jobs=1): err= 0: pid=71607: Thu Jul 25 12:16:47 2024 00:09:32.983 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:32.983 slat (nsec): min=14748, max=40868, avg=16708.68, stdev=2305.82 00:09:32.983 clat (usec): min=172, max=746, avg=193.08, stdev=15.65 00:09:32.983 lat (usec): min=188, max=765, avg=209.79, stdev=15.81 00:09:32.983 clat percentiles (usec): 00:09:32.983 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:09:32.983 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 194], 00:09:32.983 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 212], 00:09:32.983 | 99.00th=[ 225], 99.50th=[ 229], 99.90th=[ 302], 99.95th=[ 392], 00:09:32.983 | 99.99th=[ 750] 00:09:32.983 write: IOPS=2677, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:09:32.983 slat (usec): min=19, max=113, avg=23.28, stdev= 4.45 00:09:32.983 clat (usec): min=123, max=395, avg=145.73, stdev=11.26 00:09:32.983 lat (usec): min=145, max=418, avg=169.01, stdev=12.66 00:09:32.983 clat percentiles (usec): 00:09:32.983 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 
00:09:32.983 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:09:32.983 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:09:32.983 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 219], 99.95th=[ 281], 00:09:32.983 | 99.99th=[ 396] 00:09:32.983 bw ( KiB/s): min=12288, max=12288, per=31.23%, avg=12288.00, stdev= 0.00, samples=1 00:09:32.983 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:32.983 lat (usec) : 250=99.85%, 500=0.13%, 750=0.02% 00:09:32.983 cpu : usr=2.40%, sys=7.50%, ctx=5240, majf=0, minf=15 00:09:32.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.983 issued rwts: total=2560,2680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.983 00:09:32.983 Run status group 0 (all jobs): 00:09:32.983 READ: bw=34.1MiB/s (35.8MB/s), 6362KiB/s-11.6MiB/s (6514kB/s-12.2MB/s), io=34.2MiB (35.8MB), run=1001-1001msec 00:09:32.983 WRITE: bw=38.4MiB/s (40.3MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=38.5MiB (40.3MB), run=1001-1001msec 00:09:32.983 00:09:32.983 Disk stats (read/write): 00:09:32.983 nvme0n1: ios=2610/2658, merge=0/0, ticks=448/352, in_queue=800, util=88.38% 00:09:32.983 nvme0n2: ios=1585/1597, merge=0/0, ticks=481/362, in_queue=843, util=89.10% 00:09:32.983 nvme0n3: ios=1536/1555, merge=0/0, ticks=446/358, in_queue=804, util=89.27% 00:09:32.983 nvme0n4: ios=2048/2518, merge=0/0, ticks=406/396, in_queue=802, util=89.83% 00:09:32.983 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:32.983 [global] 00:09:32.983 thread=1 00:09:32.983 invalidate=1 00:09:32.983 rw=write 00:09:32.983 time_based=1 00:09:32.983 runtime=1 00:09:32.983 ioengine=libaio 00:09:32.983 direct=1 00:09:32.983 bs=4096 00:09:32.983 iodepth=128 00:09:32.983 norandommap=0 00:09:32.983 numjobs=1 00:09:32.983 00:09:32.983 verify_dump=1 00:09:32.983 verify_backlog=512 00:09:32.983 verify_state_save=0 00:09:32.983 do_verify=1 00:09:32.983 verify=crc32c-intel 00:09:32.983 [job0] 00:09:32.983 filename=/dev/nvme0n1 00:09:32.983 [job1] 00:09:32.983 filename=/dev/nvme0n2 00:09:32.983 [job2] 00:09:32.983 filename=/dev/nvme0n3 00:09:32.983 [job3] 00:09:32.983 filename=/dev/nvme0n4 00:09:32.983 Could not set queue depth (nvme0n1) 00:09:32.983 Could not set queue depth (nvme0n2) 00:09:32.983 Could not set queue depth (nvme0n3) 00:09:32.983 Could not set queue depth (nvme0n4) 00:09:32.983 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.983 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.983 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.983 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.983 fio-3.35 00:09:32.983 Starting 4 threads 00:09:34.369 00:09:34.369 job0: (groupid=0, jobs=1): err= 0: pid=71661: Thu Jul 25 12:16:48 2024 00:09:34.369 read: IOPS=2850, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1003msec) 00:09:34.369 slat (usec): min=3, max=5521, avg=169.04, stdev=634.35 00:09:34.369 clat (usec): min=557, max=28490, 
avg=21283.90, stdev=2973.37 00:09:34.369 lat (usec): min=2087, max=28802, avg=21452.94, stdev=2933.91 00:09:34.369 clat percentiles (usec): 00:09:34.369 | 1.00th=[ 5014], 5.00th=[17695], 10.00th=[18744], 20.00th=[20317], 00:09:34.369 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21627], 60.00th=[21890], 00:09:34.369 | 70.00th=[22414], 80.00th=[22938], 90.00th=[23725], 95.00th=[24773], 00:09:34.369 | 99.00th=[26346], 99.50th=[27657], 99.90th=[28443], 99.95th=[28443], 00:09:34.369 | 99.99th=[28443] 00:09:34.369 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:34.369 slat (usec): min=7, max=6179, avg=160.41, stdev=583.61 00:09:34.369 clat (usec): min=14629, max=28713, avg=21255.25, stdev=1878.85 00:09:34.369 lat (usec): min=14973, max=28730, avg=21415.66, stdev=1816.36 00:09:34.369 clat percentiles (usec): 00:09:34.369 | 1.00th=[16188], 5.00th=[18220], 10.00th=[19530], 20.00th=[20055], 00:09:34.369 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21103], 60.00th=[21627], 00:09:34.369 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22938], 95.00th=[25035], 00:09:34.369 | 99.00th=[27395], 99.50th=[28181], 99.90th=[28705], 99.95th=[28705], 00:09:34.369 | 99.99th=[28705] 00:09:34.369 bw ( KiB/s): min=12288, max=12312, per=18.82%, avg=12300.00, stdev=16.97, samples=2 00:09:34.369 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:34.369 lat (usec) : 750=0.02% 00:09:34.369 lat (msec) : 4=0.20%, 10=0.54%, 20=15.93%, 50=83.31% 00:09:34.369 cpu : usr=2.59%, sys=8.98%, ctx=1006, majf=0, minf=12 00:09:34.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:34.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.369 issued rwts: total=2859,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.369 job1: (groupid=0, jobs=1): err= 0: pid=71662: Thu Jul 25 12:16:48 2024 00:09:34.369 read: IOPS=2875, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1003msec) 00:09:34.369 slat (usec): min=2, max=6076, avg=170.38, stdev=650.54 00:09:34.369 clat (usec): min=2075, max=29202, avg=20990.06, stdev=2650.14 00:09:34.369 lat (usec): min=4573, max=29219, avg=21160.45, stdev=2622.73 00:09:34.369 clat percentiles (usec): 00:09:34.369 | 1.00th=[ 7111], 5.00th=[17433], 10.00th=[18744], 20.00th=[19792], 00:09:34.369 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21365], 60.00th=[21627], 00:09:34.369 | 70.00th=[21890], 80.00th=[22676], 90.00th=[23462], 95.00th=[23987], 00:09:34.369 | 99.00th=[26084], 99.50th=[26608], 99.90th=[29230], 99.95th=[29230], 00:09:34.369 | 99.99th=[29230] 00:09:34.369 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:34.369 slat (usec): min=4, max=5685, avg=158.00, stdev=599.99 00:09:34.369 clat (usec): min=14803, max=28949, avg=21381.12, stdev=1819.22 00:09:34.369 lat (usec): min=15750, max=28966, avg=21539.12, stdev=1744.13 00:09:34.369 clat percentiles (usec): 00:09:34.369 | 1.00th=[16188], 5.00th=[18482], 10.00th=[19530], 20.00th=[20317], 00:09:34.369 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:09:34.369 | 70.00th=[21890], 80.00th=[22414], 90.00th=[23200], 95.00th=[24773], 00:09:34.369 | 99.00th=[27132], 99.50th=[27657], 99.90th=[28181], 99.95th=[28967], 00:09:34.369 | 99.99th=[28967] 00:09:34.369 bw ( KiB/s): min=12288, max=12288, per=18.81%, avg=12288.00, stdev= 0.00, samples=2 00:09:34.369 iops : 
min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:34.369 lat (msec) : 4=0.02%, 10=0.62%, 20=16.82%, 50=82.54% 00:09:34.369 cpu : usr=2.79%, sys=8.58%, ctx=971, majf=0, minf=9 00:09:34.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:34.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.369 issued rwts: total=2884,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.369 job2: (groupid=0, jobs=1): err= 0: pid=71663: Thu Jul 25 12:16:48 2024 00:09:34.369 read: IOPS=4818, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1003msec) 00:09:34.369 slat (usec): min=6, max=3132, avg=97.76, stdev=456.00 00:09:34.369 clat (usec): min=357, max=15839, avg=12938.25, stdev=1240.39 00:09:34.369 lat (usec): min=2771, max=17601, avg=13036.01, stdev=1173.20 00:09:34.369 clat percentiles (usec): 00:09:34.369 | 1.00th=[ 6063], 5.00th=[10814], 10.00th=[12518], 20.00th=[12911], 00:09:34.369 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:09:34.369 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13960], 00:09:34.369 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15139], 99.95th=[15533], 00:09:34.369 | 99.99th=[15795] 00:09:34.369 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:34.369 slat (usec): min=9, max=3178, avg=95.55, stdev=421.65 00:09:34.369 clat (usec): min=10070, max=15176, avg=12528.10, stdev=1311.87 00:09:34.369 lat (usec): min=10092, max=15268, avg=12623.65, stdev=1312.84 00:09:34.369 clat percentiles (usec): 00:09:34.369 | 1.00th=[10552], 5.00th=[10945], 10.00th=[11076], 20.00th=[11207], 00:09:34.369 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12125], 60.00th=[13435], 00:09:34.369 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14091], 95.00th=[14353], 00:09:34.369 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15139], 99.95th=[15139], 00:09:34.369 | 99.99th=[15139] 00:09:34.369 bw ( KiB/s): min=20480, max=20480, per=31.34%, avg=20480.00, stdev= 0.00, samples=2 00:09:34.369 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:34.369 lat (usec) : 500=0.01% 00:09:34.369 lat (msec) : 4=0.32%, 10=0.62%, 20=99.05% 00:09:34.369 cpu : usr=5.09%, sys=13.47%, ctx=495, majf=0, minf=11 00:09:34.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:34.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.369 issued rwts: total=4833,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.369 job3: (groupid=0, jobs=1): err= 0: pid=71664: Thu Jul 25 12:16:48 2024 00:09:34.369 read: IOPS=5096, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:09:34.369 slat (usec): min=8, max=4038, avg=96.77, stdev=495.38 00:09:34.369 clat (usec): min=487, max=16394, avg=12406.29, stdev=1339.39 00:09:34.369 lat (usec): min=3946, max=16428, avg=12503.06, stdev=1373.64 00:09:34.369 clat percentiles (usec): 00:09:34.369 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[10814], 20.00th=[12125], 00:09:34.369 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:09:34.369 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[14222], 00:09:34.369 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16188], 99.95th=[16319], 
00:09:34.369 | 99.99th=[16450] 00:09:34.369 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:34.369 slat (usec): min=10, max=3836, avg=90.89, stdev=372.53 00:09:34.369 clat (usec): min=8456, max=16438, avg=12362.10, stdev=1285.31 00:09:34.369 lat (usec): min=8483, max=16459, avg=12452.99, stdev=1261.39 00:09:34.369 clat percentiles (usec): 00:09:34.369 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11863], 00:09:34.369 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:09:34.369 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13566], 95.00th=[13829], 00:09:34.369 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16319], 99.95th=[16450], 00:09:34.369 | 99.99th=[16450] 00:09:34.369 bw ( KiB/s): min=20439, max=20480, per=31.31%, avg=20459.50, stdev=28.99, samples=2 00:09:34.369 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:09:34.369 lat (usec) : 500=0.01% 00:09:34.369 lat (msec) : 4=0.03%, 10=6.61%, 20=93.35% 00:09:34.369 cpu : usr=3.69%, sys=15.27%, ctx=546, majf=0, minf=9 00:09:34.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:34.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.369 issued rwts: total=5112,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.369 00:09:34.369 Run status group 0 (all jobs): 00:09:34.369 READ: bw=61.1MiB/s (64.1MB/s), 11.1MiB/s-19.9MiB/s (11.7MB/s-20.9MB/s), io=61.3MiB (64.3MB), run=1003-1003msec 00:09:34.369 WRITE: bw=63.8MiB/s (66.9MB/s), 12.0MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1003-1003msec 00:09:34.369 00:09:34.369 Disk stats (read/write): 00:09:34.369 nvme0n1: ios=2586/2560, merge=0/0, ticks=12912/11740, in_queue=24652, util=87.78% 00:09:34.369 nvme0n2: ios=2609/2575, merge=0/0, ticks=12871/12044, in_queue=24915, util=88.98% 00:09:34.369 nvme0n3: ios=4096/4460, merge=0/0, ticks=12274/11979, in_queue=24253, util=88.87% 00:09:34.369 nvme0n4: ios=4180/4608, merge=0/0, ticks=15832/16531, in_queue=32363, util=89.80% 00:09:34.369 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:34.369 [global] 00:09:34.369 thread=1 00:09:34.369 invalidate=1 00:09:34.369 rw=randwrite 00:09:34.369 time_based=1 00:09:34.369 runtime=1 00:09:34.369 ioengine=libaio 00:09:34.369 direct=1 00:09:34.369 bs=4096 00:09:34.369 iodepth=128 00:09:34.369 norandommap=0 00:09:34.369 numjobs=1 00:09:34.369 00:09:34.369 verify_dump=1 00:09:34.369 verify_backlog=512 00:09:34.369 verify_state_save=0 00:09:34.369 do_verify=1 00:09:34.369 verify=crc32c-intel 00:09:34.369 [job0] 00:09:34.369 filename=/dev/nvme0n1 00:09:34.369 [job1] 00:09:34.369 filename=/dev/nvme0n2 00:09:34.369 [job2] 00:09:34.369 filename=/dev/nvme0n3 00:09:34.369 [job3] 00:09:34.369 filename=/dev/nvme0n4 00:09:34.369 Could not set queue depth (nvme0n1) 00:09:34.369 Could not set queue depth (nvme0n2) 00:09:34.369 Could not set queue depth (nvme0n3) 00:09:34.369 Could not set queue depth (nvme0n4) 00:09:34.369 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.369 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.369 job2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.369 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.369 fio-3.35 00:09:34.369 Starting 4 threads 00:09:35.741 00:09:35.741 job0: (groupid=0, jobs=1): err= 0: pid=71723: Thu Jul 25 12:16:50 2024 00:09:35.741 read: IOPS=6309, BW=24.6MiB/s (25.8MB/s)(24.7MiB/1002msec) 00:09:35.741 slat (usec): min=9, max=3048, avg=73.55, stdev=328.47 00:09:35.741 clat (usec): min=1216, max=13517, avg=9910.15, stdev=1093.38 00:09:35.741 lat (usec): min=1229, max=13886, avg=9983.71, stdev=1058.06 00:09:35.741 clat percentiles (usec): 00:09:35.741 | 1.00th=[ 6325], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[ 9503], 00:09:35.741 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9896], 00:09:35.741 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[11469], 00:09:35.741 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13042], 99.95th=[13435], 00:09:35.741 | 99.99th=[13566] 00:09:35.741 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:09:35.741 slat (usec): min=11, max=2675, avg=73.07, stdev=304.47 00:09:35.741 clat (usec): min=7421, max=12502, avg=9612.67, stdev=1145.46 00:09:35.741 lat (usec): min=7449, max=12524, avg=9685.74, stdev=1144.96 00:09:35.741 clat percentiles (usec): 00:09:35.741 | 1.00th=[ 7767], 5.00th=[ 8094], 10.00th=[ 8225], 20.00th=[ 8455], 00:09:35.741 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:09:35.742 | 70.00th=[10159], 80.00th=[10290], 90.00th=[11469], 95.00th=[11863], 00:09:35.742 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12518], 99.95th=[12518], 00:09:35.742 | 99.99th=[12518] 00:09:35.742 bw ( KiB/s): min=25000, max=28248, per=46.47%, avg=26624.00, stdev=2296.68, samples=2 00:09:35.742 iops : min= 6250, max= 7062, avg=6656.00, stdev=574.17, samples=2 00:09:35.742 lat (msec) : 2=0.14%, 4=0.21%, 10=62.41%, 20=37.25% 00:09:35.742 cpu : usr=5.29%, sys=16.38%, ctx=720, majf=0, minf=13 00:09:35.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:35.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.742 issued rwts: total=6322,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.742 job1: (groupid=0, jobs=1): err= 0: pid=71724: Thu Jul 25 12:16:50 2024 00:09:35.742 read: IOPS=2047, BW=8190KiB/s (8387kB/s)(8272KiB/1010msec) 00:09:35.742 slat (usec): min=3, max=10781, avg=185.84, stdev=912.74 00:09:35.742 clat (usec): min=9004, max=39664, avg=23908.56, stdev=3287.94 00:09:35.742 lat (usec): min=10155, max=42904, avg=24094.39, stdev=3387.54 00:09:35.742 clat percentiles (usec): 00:09:35.742 | 1.00th=[17695], 5.00th=[20055], 10.00th=[20841], 20.00th=[21890], 00:09:35.742 | 30.00th=[22152], 40.00th=[22938], 50.00th=[23987], 60.00th=[24249], 00:09:35.742 | 70.00th=[24511], 80.00th=[25297], 90.00th=[27395], 95.00th=[29754], 00:09:35.742 | 99.00th=[36439], 99.50th=[37487], 99.90th=[39584], 99.95th=[39584], 00:09:35.742 | 99.99th=[39584] 00:09:35.742 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:09:35.742 slat (usec): min=5, max=15709, avg=233.36, stdev=1092.25 00:09:35.742 clat (usec): min=12778, max=57370, avg=30368.13, stdev=11245.40 00:09:35.742 lat (usec): min=12788, max=57399, avg=30601.50, stdev=11351.99 
00:09:35.742 clat percentiles (usec): 00:09:35.742 | 1.00th=[16057], 5.00th=[20055], 10.00th=[20579], 20.00th=[21103], 00:09:35.742 | 30.00th=[21627], 40.00th=[22938], 50.00th=[23462], 60.00th=[27919], 00:09:35.742 | 70.00th=[40633], 80.00th=[42730], 90.00th=[48497], 95.00th=[50594], 00:09:35.742 | 99.00th=[52691], 99.50th=[54264], 99.90th=[55837], 99.95th=[56886], 00:09:35.742 | 99.99th=[57410] 00:09:35.742 bw ( KiB/s): min= 7328, max=12312, per=17.14%, avg=9820.00, stdev=3524.22, samples=2 00:09:35.742 iops : min= 1832, max= 3078, avg=2455.00, stdev=881.06, samples=2 00:09:35.742 lat (msec) : 10=0.02%, 20=4.95%, 50=91.75%, 100=3.28% 00:09:35.742 cpu : usr=2.28%, sys=6.34%, ctx=534, majf=0, minf=19 00:09:35.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:09:35.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.742 issued rwts: total=2068,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.742 job2: (groupid=0, jobs=1): err= 0: pid=71725: Thu Jul 25 12:16:50 2024 00:09:35.742 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:09:35.742 slat (usec): min=6, max=20734, avg=180.02, stdev=1181.83 00:09:35.742 clat (usec): min=8667, max=85109, avg=23635.21, stdev=16514.87 00:09:35.742 lat (usec): min=8692, max=85128, avg=23815.22, stdev=16601.73 00:09:35.742 clat percentiles (usec): 00:09:35.742 | 1.00th=[10683], 5.00th=[11863], 10.00th=[12911], 20.00th=[13042], 00:09:35.742 | 30.00th=[14091], 40.00th=[16909], 50.00th=[18744], 60.00th=[19530], 00:09:35.742 | 70.00th=[20579], 80.00th=[22414], 90.00th=[58459], 95.00th=[67634], 00:09:35.742 | 99.00th=[82314], 99.50th=[82314], 99.90th=[85459], 99.95th=[85459], 00:09:35.742 | 99.99th=[85459] 00:09:35.742 write: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1005msec); 0 zone resets 00:09:35.742 slat (usec): min=9, max=15282, avg=191.63, stdev=969.31 00:09:35.742 clat (usec): min=3978, max=87457, avg=24668.25, stdev=16652.07 00:09:35.742 lat (usec): min=4000, max=87480, avg=24859.88, stdev=16745.67 00:09:35.742 clat percentiles (usec): 00:09:35.742 | 1.00th=[ 7832], 5.00th=[10683], 10.00th=[11076], 20.00th=[12649], 00:09:35.742 | 30.00th=[13566], 40.00th=[14353], 50.00th=[20579], 60.00th=[21103], 00:09:35.742 | 70.00th=[21627], 80.00th=[41681], 90.00th=[51119], 95.00th=[52167], 00:09:35.742 | 99.00th=[84411], 99.50th=[84411], 99.90th=[87557], 99.95th=[87557], 00:09:35.742 | 99.99th=[87557] 00:09:35.742 bw ( KiB/s): min= 8280, max=12288, per=17.95%, avg=10284.00, stdev=2834.08, samples=2 00:09:35.742 iops : min= 2070, max= 3072, avg=2571.00, stdev=708.52, samples=2 00:09:35.742 lat (msec) : 4=0.02%, 10=0.89%, 20=54.38%, 50=33.02%, 100=11.69% 00:09:35.742 cpu : usr=2.39%, sys=7.97%, ctx=328, majf=0, minf=9 00:09:35.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:35.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.742 issued rwts: total=2560,2692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.742 job3: (groupid=0, jobs=1): err= 0: pid=71726: Thu Jul 25 12:16:50 2024 00:09:35.742 read: IOPS=2084, BW=8337KiB/s (8537kB/s)(8412KiB/1009msec) 00:09:35.742 slat (usec): min=4, max=10662, avg=194.98, stdev=944.08 
00:09:35.742 clat (usec): min=7727, max=46301, avg=23617.77, stdev=3758.61 00:09:35.742 lat (usec): min=8799, max=46311, avg=23812.75, stdev=3856.33 00:09:35.742 clat percentiles (usec): 00:09:35.742 | 1.00th=[16188], 5.00th=[17171], 10.00th=[20317], 20.00th=[21890], 00:09:35.742 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23725], 60.00th=[23987], 00:09:35.742 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26608], 95.00th=[28705], 00:09:35.742 | 99.00th=[41157], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:09:35.742 | 99.99th=[46400] 00:09:35.742 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:09:35.742 slat (usec): min=5, max=10948, avg=223.25, stdev=1039.57 00:09:35.742 clat (usec): min=15261, max=59207, avg=30335.94, stdev=11085.43 00:09:35.742 lat (usec): min=15322, max=59245, avg=30559.18, stdev=11198.86 00:09:35.742 clat percentiles (usec): 00:09:35.742 | 1.00th=[16712], 5.00th=[20317], 10.00th=[20579], 20.00th=[21365], 00:09:35.742 | 30.00th=[22152], 40.00th=[22938], 50.00th=[23725], 60.00th=[26084], 00:09:35.742 | 70.00th=[39584], 80.00th=[42206], 90.00th=[48497], 95.00th=[50070], 00:09:35.742 | 99.00th=[54264], 99.50th=[54789], 99.90th=[57410], 99.95th=[57410], 00:09:35.742 | 99.99th=[58983] 00:09:35.742 bw ( KiB/s): min= 7616, max=12288, per=17.37%, avg=9952.00, stdev=3303.60, samples=2 00:09:35.742 iops : min= 1904, max= 3072, avg=2488.00, stdev=825.90, samples=2 00:09:35.742 lat (msec) : 10=0.11%, 20=6.30%, 50=90.54%, 100=3.05% 00:09:35.742 cpu : usr=1.49%, sys=7.24%, ctx=524, majf=0, minf=11 00:09:35.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:09:35.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.742 issued rwts: total=2103,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.742 00:09:35.742 Run status group 0 (all jobs): 00:09:35.742 READ: bw=50.5MiB/s (52.9MB/s), 8190KiB/s-24.6MiB/s (8387kB/s-25.8MB/s), io=51.0MiB (53.5MB), run=1002-1010msec 00:09:35.742 WRITE: bw=56.0MiB/s (58.7MB/s), 9.90MiB/s-25.9MiB/s (10.4MB/s-27.2MB/s), io=56.5MiB (59.3MB), run=1002-1010msec 00:09:35.742 00:09:35.742 Disk stats (read/write): 00:09:35.742 nvme0n1: ios=5563/5632, merge=0/0, ticks=12552/11131, in_queue=23683, util=88.65% 00:09:35.742 nvme0n2: ios=2088/2104, merge=0/0, ticks=22968/27112, in_queue=50080, util=87.70% 00:09:35.742 nvme0n3: ios=2048/2311, merge=0/0, ticks=11803/14116, in_queue=25919, util=89.13% 00:09:35.742 nvme0n4: ios=2048/2167, merge=0/0, ticks=23349/27464, in_queue=50813, util=88.85% 00:09:35.742 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:35.742 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71739 00:09:35.742 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:35.742 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:35.742 [global] 00:09:35.742 thread=1 00:09:35.742 invalidate=1 00:09:35.742 rw=read 00:09:35.742 time_based=1 00:09:35.742 runtime=10 00:09:35.742 ioengine=libaio 00:09:35.742 direct=1 00:09:35.742 bs=4096 00:09:35.742 iodepth=1 00:09:35.742 norandommap=1 00:09:35.742 numjobs=1 00:09:35.742 00:09:35.742 [job0] 00:09:35.742 filename=/dev/nvme0n1 00:09:35.742 [job1] 
00:09:35.742 filename=/dev/nvme0n2 00:09:35.742 [job2] 00:09:35.742 filename=/dev/nvme0n3 00:09:35.742 [job3] 00:09:35.742 filename=/dev/nvme0n4 00:09:35.742 Could not set queue depth (nvme0n1) 00:09:35.742 Could not set queue depth (nvme0n2) 00:09:35.742 Could not set queue depth (nvme0n3) 00:09:35.742 Could not set queue depth (nvme0n4) 00:09:35.742 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.742 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.742 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.742 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.742 fio-3.35 00:09:35.742 Starting 4 threads 00:09:39.027 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:39.027 fio: pid=71786, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:39.027 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=35360768, buflen=4096 00:09:39.027 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:39.027 fio: pid=71785, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:39.027 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=40402944, buflen=4096 00:09:39.027 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.027 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:39.285 fio: pid=71783, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:39.285 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=46391296, buflen=4096 00:09:39.285 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.285 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:39.544 fio: pid=71784, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:39.544 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=55631872, buflen=4096 00:09:39.544 00:09:39.544 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71783: Thu Jul 25 12:16:54 2024 00:09:39.544 read: IOPS=3298, BW=12.9MiB/s (13.5MB/s)(44.2MiB/3434msec) 00:09:39.544 slat (usec): min=13, max=12736, avg=19.92, stdev=200.37 00:09:39.544 clat (usec): min=129, max=7619, avg=281.56, stdev=168.33 00:09:39.544 lat (usec): min=146, max=13056, avg=301.48, stdev=262.56 00:09:39.544 clat percentiles (usec): 00:09:39.544 | 1.00th=[ 149], 5.00th=[ 225], 10.00th=[ 241], 20.00th=[ 258], 00:09:39.544 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:09:39.544 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:09:39.544 | 99.00th=[ 396], 99.50th=[ 437], 99.90th=[ 3556], 99.95th=[ 3916], 00:09:39.544 | 99.99th=[ 6587] 00:09:39.544 bw ( KiB/s): min=12400, max=13896, per=28.21%, avg=13237.33, stdev=556.60, samples=6 00:09:39.544 iops : 
min= 3100, max= 3474, avg=3309.33, stdev=139.15, samples=6 00:09:39.544 lat (usec) : 250=13.64%, 500=86.00%, 750=0.08%, 1000=0.06% 00:09:39.544 lat (msec) : 2=0.05%, 4=0.11%, 10=0.04% 00:09:39.544 cpu : usr=0.79%, sys=4.57%, ctx=11336, majf=0, minf=1 00:09:39.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.544 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.544 issued rwts: total=11327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.544 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71784: Thu Jul 25 12:16:54 2024 00:09:39.544 read: IOPS=3671, BW=14.3MiB/s (15.0MB/s)(53.1MiB/3700msec) 00:09:39.544 slat (usec): min=12, max=11826, avg=22.08, stdev=195.94 00:09:39.544 clat (usec): min=41, max=3536, avg=248.46, stdev=78.69 00:09:39.544 lat (usec): min=136, max=12148, avg=270.54, stdev=211.89 00:09:39.544 clat percentiles (usec): 00:09:39.544 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 143], 20.00th=[ 167], 00:09:39.544 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:09:39.544 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:09:39.544 | 99.00th=[ 367], 99.50th=[ 408], 99.90th=[ 1057], 99.95th=[ 1598], 00:09:39.544 | 99.99th=[ 2409] 00:09:39.544 bw ( KiB/s): min=13104, max=18931, per=30.51%, avg=14315.86, stdev=2071.94, samples=7 00:09:39.544 iops : min= 3276, max= 4732, avg=3578.86, stdev=517.71, samples=7 00:09:39.544 lat (usec) : 50=0.01%, 250=32.24%, 500=67.51%, 750=0.07%, 1000=0.06% 00:09:39.544 lat (msec) : 2=0.08%, 4=0.02% 00:09:39.544 cpu : usr=1.16%, sys=5.49%, ctx=13597, majf=0, minf=1 00:09:39.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.544 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.544 issued rwts: total=13583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.544 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71785: Thu Jul 25 12:16:54 2024 00:09:39.544 read: IOPS=3111, BW=12.2MiB/s (12.7MB/s)(38.5MiB/3170msec) 00:09:39.544 slat (usec): min=13, max=11831, avg=25.16, stdev=166.43 00:09:39.544 clat (usec): min=140, max=2618, avg=293.70, stdev=69.98 00:09:39.544 lat (usec): min=156, max=12011, avg=318.85, stdev=180.30 00:09:39.544 clat percentiles (usec): 00:09:39.544 | 1.00th=[ 151], 5.00th=[ 167], 10.00th=[ 258], 20.00th=[ 269], 00:09:39.544 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:09:39.544 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 355], 95.00th=[ 420], 00:09:39.544 | 99.00th=[ 486], 99.50th=[ 570], 99.90th=[ 889], 99.95th=[ 1004], 00:09:39.544 | 99.99th=[ 2606] 00:09:39.544 bw ( KiB/s): min=10448, max=13752, per=26.32%, avg=12352.00, stdev=1126.19, samples=6 00:09:39.544 iops : min= 2612, max= 3438, avg=3088.00, stdev=281.55, samples=6 00:09:39.544 lat (usec) : 250=8.93%, 500=90.28%, 750=0.67%, 1000=0.06% 00:09:39.544 lat (msec) : 2=0.04%, 4=0.01% 00:09:39.544 cpu : usr=1.29%, sys=5.81%, ctx=9869, majf=0, minf=1 00:09:39.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.544 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.544 issued rwts: total=9865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.544 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71786: Thu Jul 25 12:16:54 2024 00:09:39.544 read: IOPS=2968, BW=11.6MiB/s (12.2MB/s)(33.7MiB/2909msec) 00:09:39.544 slat (nsec): min=12783, max=94967, avg=18404.84, stdev=6544.10 00:09:39.544 clat (usec): min=154, max=6528, avg=316.55, stdev=144.43 00:09:39.544 lat (usec): min=170, max=6543, avg=334.95, stdev=145.71 00:09:39.544 clat percentiles (usec): 00:09:39.544 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:09:39.544 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:09:39.544 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 375], 95.00th=[ 441], 00:09:39.544 | 99.00th=[ 510], 99.50th=[ 644], 99.90th=[ 2212], 99.95th=[ 4113], 00:09:39.544 | 99.99th=[ 6521] 00:09:39.544 bw ( KiB/s): min=10224, max=12864, per=24.95%, avg=11707.20, stdev=1306.56, samples=5 00:09:39.544 iops : min= 2556, max= 3216, avg=2926.80, stdev=326.64, samples=5 00:09:39.544 lat (usec) : 250=0.27%, 500=98.59%, 750=0.85%, 1000=0.15% 00:09:39.544 lat (msec) : 2=0.01%, 4=0.07%, 10=0.06% 00:09:39.544 cpu : usr=0.96%, sys=4.47%, ctx=8642, majf=0, minf=1 00:09:39.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.544 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.544 issued rwts: total=8634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.544 00:09:39.544 Run status group 0 (all jobs): 00:09:39.544 READ: bw=45.8MiB/s (48.1MB/s), 11.6MiB/s-14.3MiB/s (12.2MB/s-15.0MB/s), io=170MiB (178MB), run=2909-3700msec 00:09:39.544 00:09:39.544 Disk stats (read/write): 00:09:39.544 nvme0n1: ios=11090/0, merge=0/0, ticks=3167/0, in_queue=3167, util=94.74% 00:09:39.545 nvme0n2: ios=13066/0, merge=0/0, ticks=3362/0, in_queue=3362, util=95.40% 00:09:39.545 nvme0n3: ios=9677/0, merge=0/0, ticks=2919/0, in_queue=2919, util=96.03% 00:09:39.545 nvme0n4: ios=8502/0, merge=0/0, ticks=2713/0, in_queue=2713, util=96.53% 00:09:39.545 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.545 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:39.803 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.803 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:40.061 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.061 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:40.319 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:09:40.320 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:40.578 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.578 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 71739 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:40.836 nvmf hotplug test: fio failed as expected 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:40.836 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:41.093 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.094 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:09:41.094 rmmod nvme_tcp 00:09:41.352 rmmod nvme_fabrics 00:09:41.352 rmmod nvme_keyring 00:09:41.352 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 71243 ']' 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 71243 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 71243 ']' 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 71243 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71243 00:09:41.352 killing process with pid 71243 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71243' 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 71243 00:09:41.352 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 71243 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:41.611 ************************************ 00:09:41.611 END TEST nvmf_fio_target 00:09:41.611 ************************************ 00:09:41.611 00:09:41.611 real 0m19.633s 00:09:41.611 user 1m15.467s 00:09:41.611 sys 0m8.657s 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.611 ************************************ 00:09:41.611 START TEST nvmf_bdevio 00:09:41.611 ************************************ 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:41.611 * Looking for test storage... 00:09:41.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.611 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.612 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set 
nvmf_init_br nomaster 00:09:41.879 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:41.879 Cannot find device "nvmf_tgt_br" 00:09:41.879 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:41.879 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.879 Cannot find device "nvmf_tgt_br2" 00:09:41.879 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:41.880 Cannot find device "nvmf_tgt_br" 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:41.880 Cannot find device "nvmf_tgt_br2" 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 
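Note: the nvmf_veth_init steps traced above build the TCP test network used by this job. A condensed sketch of the equivalent iproute2 commands — names and addresses taken from the log, assuming a clean host and root privileges (the earlier "Cannot find device" / "Cannot open network namespace" errors are just teardown of state that does not exist yet):

    # target namespace plus one initiator and two target veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # target ends move into the namespace; addresses follow the log
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up

The bridge half of the setup (nvmf_br creation, the master assignments, and the iptables ACCEPT rules for TCP port 4420) continues in the trace just below, followed by ping checks of 10.0.0.2, 10.0.0.3, and 10.0.0.1 across the namespace boundary.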
00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.880 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:42.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:42.159 00:09:42.159 --- 10.0.0.2 ping statistics --- 00:09:42.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.159 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:42.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:42.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:09:42.159 00:09:42.159 --- 10.0.0.3 ping statistics --- 00:09:42.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.159 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:42.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:42.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:42.159 00:09:42.159 --- 10.0.0.1 ping statistics --- 00:09:42.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.159 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=72109 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 72109 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 72109 ']' 00:09:42.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.159 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.159 [2024-07-25 12:16:56.853921] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:09:42.159 [2024-07-25 12:16:56.854059] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.159 [2024-07-25 12:16:56.987216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.417 [2024-07-25 12:16:57.116033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.417 [2024-07-25 12:16:57.116625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.417 [2024-07-25 12:16:57.117131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.417 [2024-07-25 12:16:57.117609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.417 [2024-07-25 12:16:57.117846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.417 [2024-07-25 12:16:57.118215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:42.417 [2024-07-25 12:16:57.118373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:42.417 [2024-07-25 12:16:57.118524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.417 [2024-07-25 12:16:57.118532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:42.984 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.984 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:42.984 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.984 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.984 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.984 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.984 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.984 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.984 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.985 [2024-07-25 12:16:57.843975] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.243 Malloc0 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
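Note: rpc_cmd in these traces is the test harness's wrapper around scripts/rpc.py, which talks to the target over /var/tmp/spdk.sock. Assuming a running nvmf_tgt, the provisioning traced here and completed just below (namespace attach and listener add) can be reproduced by hand as:

    rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport; flags exactly as traced
    rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allow any host, -s serial
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up on 10.0.0.2:4420, the bdevio run below attaches over NVMe/TCP using the JSON config emitted by gen_nvmf_target_json.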
00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.243 [2024-07-25 12:16:57.914786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:43.243 { 00:09:43.243 "params": { 00:09:43.243 "name": "Nvme$subsystem", 00:09:43.243 "trtype": "$TEST_TRANSPORT", 00:09:43.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.243 "adrfam": "ipv4", 00:09:43.243 "trsvcid": "$NVMF_PORT", 00:09:43.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.243 "hdgst": ${hdgst:-false}, 00:09:43.243 "ddgst": ${ddgst:-false} 00:09:43.243 }, 00:09:43.243 "method": "bdev_nvme_attach_controller" 00:09:43.243 } 00:09:43.243 EOF 00:09:43.243 )") 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:43.243 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:43.243 "params": { 00:09:43.243 "name": "Nvme1", 00:09:43.243 "trtype": "tcp", 00:09:43.243 "traddr": "10.0.0.2", 00:09:43.244 "adrfam": "ipv4", 00:09:43.244 "trsvcid": "4420", 00:09:43.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.244 "hdgst": false, 00:09:43.244 "ddgst": false 00:09:43.244 }, 00:09:43.244 "method": "bdev_nvme_attach_controller" 00:09:43.244 }' 00:09:43.244 [2024-07-25 12:16:57.982184] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:09:43.244 [2024-07-25 12:16:57.982275] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72163 ] 00:09:43.502 [2024-07-25 12:16:58.120286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.502 [2024-07-25 12:16:58.255220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.502 [2024-07-25 12:16:58.255364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.502 [2024-07-25 12:16:58.255368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.760 I/O targets: 00:09:43.760 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:43.760 00:09:43.760 00:09:43.760 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.760 http://cunit.sourceforge.net/ 00:09:43.760 00:09:43.760 00:09:43.760 Suite: bdevio tests on: Nvme1n1 00:09:43.760 Test: blockdev write read block ...passed 00:09:43.760 Test: blockdev write zeroes read block ...passed 00:09:43.760 Test: blockdev write zeroes read no split ...passed 00:09:43.760 Test: blockdev write zeroes read split ...passed 00:09:43.760 Test: blockdev write zeroes read split partial ...passed 00:09:43.760 Test: blockdev reset ...[2024-07-25 12:16:58.557802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:43.760 [2024-07-25 12:16:58.558086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d180 (9): Bad file descriptor 00:09:43.760 [2024-07-25 12:16:58.571212] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:43.760 passed 00:09:43.760 Test: blockdev write read 8 blocks ...passed 00:09:43.760 Test: blockdev write read size > 128k ...passed 00:09:43.760 Test: blockdev write read invalid size ...passed 00:09:43.761 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:43.761 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:43.761 Test: blockdev write read max offset ...passed 00:09:44.019 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.019 Test: blockdev writev readv 8 blocks ...passed 00:09:44.019 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.019 Test: blockdev writev readv block ...passed 00:09:44.019 Test: blockdev writev readv size > 128k ...passed 00:09:44.019 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.019 Test: blockdev comparev and writev ...[2024-07-25 12:16:58.745941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.019 [2024-07-25 12:16:58.746119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:44.019 [2024-07-25 12:16:58.746149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.019 [2024-07-25 12:16:58.746162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:44.019 [2024-07-25 12:16:58.746698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.019 [2024-07-25 12:16:58.746722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:44.019 [2024-07-25 12:16:58.746740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.019 [2024-07-25 12:16:58.746750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:44.019 [2024-07-25 12:16:58.747173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.019 [2024-07-25 12:16:58.747194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:44.019 [2024-07-25 12:16:58.747213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.019 [2024-07-25 12:16:58.747223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:44.019 [2024-07-25 12:16:58.747698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.019 [2024-07-25 12:16:58.747848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:44.020 [2024-07-25 12:16:58.748034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.020 [2024-07-25 12:16:58.748243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:44.020 passed 00:09:44.020 Test: blockdev nvme passthru rw ...passed 00:09:44.020 Test: blockdev nvme passthru vendor specific ...[2024-07-25 12:16:58.832035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.020 [2024-07-25 12:16:58.832091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:44.020 [2024-07-25 12:16:58.832457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.020 [2024-07-25 12:16:58.832479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:44.020 [2024-07-25 12:16:58.832681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.020 [2024-07-25 12:16:58.832702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:44.020 [2024-07-25 12:16:58.832882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.020 [2024-07-25 12:16:58.832924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:44.020 passed 00:09:44.020 Test: blockdev nvme admin passthru ...passed 00:09:44.278 Test: blockdev copy ...passed 00:09:44.278 00:09:44.278 Run Summary: Type Total Ran Passed Failed Inactive 00:09:44.278 suites 1 1 n/a 0 0 00:09:44.278 tests 23 23 23 0 0 00:09:44.278 asserts 152 152 152 0 n/a 00:09:44.278 00:09:44.278 Elapsed time = 0.895 seconds 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.278 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.278 rmmod nvme_tcp 00:09:44.536 rmmod nvme_fabrics 00:09:44.536 rmmod nvme_keyring 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
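Note: the nvmftestfini path traced here, and completed below by killprocess and _remove_spdk_ns, unwinds everything nvmftestinit set up. A hedged sketch of the sequence, using the pid and names from this run (the namespace deletion is the presumed body of _remove_spdk_ns; the address flush is logged below):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    sync                                                      # settle I/O before unloading modules
    modprobe -v -r nvme-tcp        # also ejects nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 72109 && wait 72109       # killprocess: stop nvmf_tgt (reactor_3 in the trace below)
    ip netns delete nvmf_tgt_ns_spdk   # assumed implementation of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if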
00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 72109 ']' 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 72109 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 72109 ']' 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 72109 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72109 00:09:44.536 killing process with pid 72109 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72109' 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 72109 00:09:44.536 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 72109 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:44.795 ************************************ 00:09:44.795 END TEST nvmf_bdevio 00:09:44.795 ************************************ 00:09:44.795 00:09:44.795 real 0m3.171s 00:09:44.795 user 0m11.476s 00:09:44.795 sys 0m0.768s 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:44.795 00:09:44.795 real 3m35.016s 00:09:44.795 user 11m23.620s 00:09:44.795 sys 1m0.518s 00:09:44.795 ************************************ 00:09:44.795 END TEST nvmf_target_core 00:09:44.795 ************************************ 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.795 12:16:59 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:44.795 12:16:59 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.795 12:16:59 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.795 12:16:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.795 ************************************ 00:09:44.795 START TEST nvmf_target_extra 00:09:44.795 ************************************ 00:09:44.795 12:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:45.054 * Looking for test storage... 00:09:45.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:45.054 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.055 12:16:59 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:45.055 ************************************ 00:09:45.055 START TEST nvmf_example 00:09:45.055 ************************************ 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.055 * Looking for test storage... 00:09:45.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.055 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.056 12:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 
00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:45.056 Cannot find device "nvmf_tgt_br" 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # true 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.056 Cannot find device "nvmf_tgt_br2" 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # true 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:45.056 Cannot find device "nvmf_tgt_br" 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # true 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:45.056 Cannot find device "nvmf_tgt_br2" 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # true 00:09:45.056 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.315 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:45.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:09:45.315 00:09:45.315 --- 10.0.0.2 ping statistics --- 00:09:45.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.315 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:45.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:45.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:09:45.315 00:09:45.315 --- 10.0.0.3 ping statistics --- 00:09:45.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.315 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:45.315 00:09:45.315 --- 10.0.0.1 ping statistics --- 00:09:45.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.315 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:45.315 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=72404 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 72404 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 72404 ']' 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.574 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:46.509 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.509 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:09:46.509 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.510 12:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:09:46.510 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:58.711 Initializing NVMe Controllers
00:09:58.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:58.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:58.711 Initialization complete. Launching workers.
00:09:58.711 ========================================================
00:09:58.712 Latency(us)
00:09:58.712 Device Information : IOPS MiB/s Average min max
00:09:58.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14833.80 57.94 4316.24 825.61 23158.21
00:09:58.712 ========================================================
00:09:58.712 Total : 14833.80 57.94 4316.24 825.61 23158.21
00:09:58.712
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:58.712 rmmod nvme_tcp
00:09:58.712 rmmod nvme_fabrics
00:09:58.712 rmmod nvme_keyring
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 72404 ']'
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 72404
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 72404 ']'
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 72404
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72404
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
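[Editor's note: the spdk_nvme_perf invocation above runs 10 seconds (-t 10) of 4096-byte I/O (-o 4096) at queue depth 64 (-q 64), a random mixed workload (-w randrw) with a 30% read share (-M 30), against the transport ID given with -r. The summary row is internally consistent: 14833.80 IOPS x 4096 B is 57.9 MiB/s of throughput, and by Little's law the average latency is 64 / 14833.80 = 0.00431 s, about 4314 us, matching the reported 4316.24 us average.]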
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']'
00:09:58.712 killing process with pid 72404
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72404'
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 72404
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 72404
00:09:58.712 nvmf threads initialize successfully
00:09:58.712 bdev subsystem init successfully
00:09:58.712 created a nvmf target service
00:09:58.712 create targets's poll groups done
00:09:58.712 all subsystems of target started
00:09:58.712 nvmf target is running
00:09:58.712 all subsystems of target stopped
00:09:58.712 destroy targets's poll groups done
00:09:58.712 destroyed the nvmf target service
00:09:58.712 bdev subsystem finish successfully
00:09:58.712 nvmf threads destroy successfully
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:58.712 ************************************
00:09:58.712 END TEST nvmf_example
00:09:58.712 ************************************
00:09:58.712
00:09:58.712 real 0m12.267s
00:09:58.712 user 0m44.170s
00:09:58.712 sys 0m2.061s
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:58.712 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:09:58.712 ************************
00:09:58.712 START TEST nvmf_filesystem
00:09:58.712 ************************
00:09:58.712 12:17:12
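[Editor's note: the teardown above (nvmftestfini) unloads the initiator-side kernel modules, kills the example target by PID, and dismantles the namespace plumbing. A standalone approximation follows, assuming the namespace and interface names used in this job; the exact _remove_spdk_ns internals are not shown in this log.]
  modprobe -r nvme-tcp nvme-fabrics   # unload initiator modules (pulls nvme_keyring out too, per the rmmod lines)
  kill "$nvmfpid" && wait "$nvmfpid"  # SIGTERM lets the target stop subsystems and poll groups cleanly
  ip netns delete nvmf_tgt_ns_spdk    # drop the target-side network namespace and its veth end
  ip -4 addr flush nvmf_init_if       # clear the initiator-side interface, as done at nvmf/common.sh@279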
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:58.712 * Looking for test storage... 00:09:58.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:58.712 12:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:58.712 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 
-- # CONFIG_MAX_LCORES=128 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:58.713 #define SPDK_CONFIG_H 00:09:58.713 #define SPDK_CONFIG_APPS 1 00:09:58.713 #define SPDK_CONFIG_ARCH native 00:09:58.713 #undef SPDK_CONFIG_ASAN 00:09:58.713 #define SPDK_CONFIG_AVAHI 1 00:09:58.713 #undef SPDK_CONFIG_CET 00:09:58.713 #define SPDK_CONFIG_COVERAGE 1 00:09:58.713 #define SPDK_CONFIG_CROSS_PREFIX 00:09:58.713 #undef SPDK_CONFIG_CRYPTO 00:09:58.713 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:58.713 #undef SPDK_CONFIG_CUSTOMOCF 00:09:58.713 #undef SPDK_CONFIG_DAOS 00:09:58.713 #define 
SPDK_CONFIG_DAOS_DIR 00:09:58.713 #define SPDK_CONFIG_DEBUG 1 00:09:58.713 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:58.713 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:58.713 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:58.713 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:58.713 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:58.713 #undef SPDK_CONFIG_DPDK_UADK 00:09:58.713 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:58.713 #define SPDK_CONFIG_EXAMPLES 1 00:09:58.713 #undef SPDK_CONFIG_FC 00:09:58.713 #define SPDK_CONFIG_FC_PATH 00:09:58.713 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:58.713 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:58.713 #undef SPDK_CONFIG_FUSE 00:09:58.713 #undef SPDK_CONFIG_FUZZER 00:09:58.713 #define SPDK_CONFIG_FUZZER_LIB 00:09:58.713 #define SPDK_CONFIG_GOLANG 1 00:09:58.713 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:58.713 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:58.713 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:58.713 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:58.713 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:58.713 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:58.713 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:58.713 #define SPDK_CONFIG_IDXD 1 00:09:58.713 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:58.713 #undef SPDK_CONFIG_IPSEC_MB 00:09:58.713 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:58.713 #define SPDK_CONFIG_ISAL 1 00:09:58.713 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:58.713 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:58.713 #define SPDK_CONFIG_LIBDIR 00:09:58.713 #undef SPDK_CONFIG_LTO 00:09:58.713 #define SPDK_CONFIG_MAX_LCORES 128 00:09:58.713 #define SPDK_CONFIG_NVME_CUSE 1 00:09:58.713 #undef SPDK_CONFIG_OCF 00:09:58.713 #define SPDK_CONFIG_OCF_PATH 00:09:58.713 #define SPDK_CONFIG_OPENSSL_PATH 00:09:58.713 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:58.713 #define SPDK_CONFIG_PGO_DIR 00:09:58.713 #undef SPDK_CONFIG_PGO_USE 00:09:58.713 #define SPDK_CONFIG_PREFIX /usr/local 00:09:58.713 #undef SPDK_CONFIG_RAID5F 00:09:58.713 #undef SPDK_CONFIG_RBD 00:09:58.713 #define SPDK_CONFIG_RDMA 1 00:09:58.713 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:58.713 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:58.713 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:58.713 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:58.713 #define SPDK_CONFIG_SHARED 1 00:09:58.713 #undef SPDK_CONFIG_SMA 00:09:58.713 #define SPDK_CONFIG_TESTS 1 00:09:58.713 #undef SPDK_CONFIG_TSAN 00:09:58.713 #define SPDK_CONFIG_UBLK 1 00:09:58.713 #define SPDK_CONFIG_UBSAN 1 00:09:58.713 #undef SPDK_CONFIG_UNIT_TESTS 00:09:58.713 #undef SPDK_CONFIG_URING 00:09:58.713 #define SPDK_CONFIG_URING_PATH 00:09:58.713 #undef SPDK_CONFIG_URING_ZNS 00:09:58.713 #define SPDK_CONFIG_USDT 1 00:09:58.713 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:58.713 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:58.713 #undef SPDK_CONFIG_VFIO_USER 00:09:58.713 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:58.713 #define SPDK_CONFIG_VHOST 1 00:09:58.713 #define SPDK_CONFIG_VIRTIO 1 00:09:58.713 #undef SPDK_CONFIG_VTUNE 00:09:58.713 #define SPDK_CONFIG_VTUNE_DIR 00:09:58.713 #define SPDK_CONFIG_WERROR 1 00:09:58.713 #define SPDK_CONFIG_WPDK_DIR 00:09:58.713 #undef SPDK_CONFIG_XNVME 00:09:58.713 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:58.713 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 
00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:58.714 12:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # 
export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 
-- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:58.714 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@258 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 72634 ]] 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 72634 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:58.715 12:17:12 
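[Editor's note: the 'kill -0 72634' above sends no signal at all; signal 0 only asks the kernel whether the PID exists and can be signalled, which is how set_test_storage confirms the test process is still alive before provisioning scratch space. The usual shell idiom:]
  if kill -0 "$pid" 2>/dev/null; then
      echo "process $pid is alive"
  fi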
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.iApxqH 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.iApxqH/tests/target /tmp/spdk.iApxqH 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=devtmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4194304 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4194304 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6257971200 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=2487009280 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=2507157504 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=20148224 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13784330240 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5245456384 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13784330240 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5245456384 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda2 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=843546624 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1012768768 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=100016128 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6267756544 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=135168 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda3 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=92499968 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=104607744 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12107776 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1253572608 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253576704 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:58.715 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=94253613056 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5449166848 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:58.716 * Looking for test storage... 
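The trace above is set_test_storage() sizing up every mount from a single `df -T` pass into the mounts/fss/sizes/avails/uses arrays, then (below) walking storage_candidates for the first directory with at least requested_size bytes free. A minimal standalone sketch of that selection step, assuming GNU df; pick_test_storage and its interface are illustrative names, not SPDK's:

pick_test_storage() {
    local requested_size=$1; shift            # bytes, e.g. 2214592512 in this run
    local dir avail
    for dir in "$@"; do                       # candidate directories, in priority order
        # df reports 1 KiB blocks; convert to bytes before comparing
        avail=$(( $(df --output=avail "$dir" | tail -n1) * 1024 ))
        if (( avail >= requested_size )); then
            printf '* Found test storage at %s\n' "$dir"
            echo "$dir"; return 0
        fi
    done
    return 1                                  # no candidate is large enough
}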
00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/home 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=13784330240 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == tmpfs ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == ramfs ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ /home == / ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 
-- # xtrace_restore 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
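The nvmf_veth_init run that follows builds the virtual test network those variables describe: veth pairs for the initiator and both target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, and the peer ends bridged by nvmf_br. Condensed to its shape (commands and addresses as in the trace; the second target veth, the link-up steps, and the FORWARD rule are elided here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                      # bridge the host-side ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT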
00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:58.716 Cannot find device "nvmf_tgt_br" 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.716 Cannot find device "nvmf_tgt_br2" 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:58.716 Cannot find device "nvmf_tgt_br" 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:58.716 Cannot find device "nvmf_tgt_br2" 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:09:58.716 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@178 -- # 
ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:58.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:58.717 00:09:58.717 --- 10.0.0.2 ping statistics --- 00:09:58.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.717 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:58.717 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:58.717 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:09:58.717 00:09:58.717 --- 10.0.0.3 ping statistics --- 00:09:58.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.717 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:58.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:09:58.717 00:09:58.717 --- 10.0.0.1 ping statistics --- 00:09:58.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.717 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.717 ************************************ 00:09:58.717 START TEST nvmf_filesystem_no_in_capsule 00:09:58.717 ************************************ 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=72796 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 72796 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 72796 ']' 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.717 12:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:58.717 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.717 [2024-07-25 12:17:12.645264] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:09:58.717 [2024-07-25 12:17:12.645398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.717 [2024-07-25 12:17:12.779886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.717 [2024-07-25 12:17:12.894657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.717 [2024-07-25 12:17:12.894722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.717 [2024-07-25 12:17:12.894735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.717 [2024-07-25 12:17:12.894744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.717 [2024-07-25 12:17:12.894751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
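nvmfappstart has just launched nvmf_tgt inside the namespace (pid 72796) and waitforlisten blocks until the daemon answers on /var/tmp/spdk.sock. A minimal poll with the same effect, keeping the 100-retry bound from the trace; probing with rpc.py rpc_get_methods is an assumption about the readiness check, not a quote of waitforlisten's implementation:

wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1       # daemon died while starting
        if [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            return 0                                 # socket up and answering RPCs
        fi
        sleep 0.5
    done
    return 1                                         # never came up; let the caller fail
}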
00:09:58.717 [2024-07-25 12:17:12.894900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.717 [2024-07-25 12:17:12.895468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.717 [2024-07-25 12:17:12.895672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.717 [2024-07-25 12:17:12.895677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.975 [2024-07-25 12:17:13.630824] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.975 Malloc1 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.975 12:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.975 [2024-07-25 12:17:13.817604] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.975 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.233 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.233 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:59.233 { 00:09:59.233 "aliases": [ 00:09:59.233 "fb00bdff-ac9e-4017-9860-cc5acf47a691" 00:09:59.233 ], 00:09:59.233 "assigned_rate_limits": { 00:09:59.233 "r_mbytes_per_sec": 0, 00:09:59.233 "rw_ios_per_sec": 0, 00:09:59.233 "rw_mbytes_per_sec": 0, 00:09:59.233 "w_mbytes_per_sec": 0 00:09:59.233 }, 00:09:59.233 "block_size": 512, 00:09:59.233 "claim_type": "exclusive_write", 00:09:59.233 "claimed": true, 00:09:59.233 "driver_specific": {}, 00:09:59.233 "memory_domains": [ 00:09:59.233 { 00:09:59.233 "dma_device_id": "system", 00:09:59.233 "dma_device_type": 1 00:09:59.233 }, 00:09:59.233 { 00:09:59.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.233 
"dma_device_type": 2 00:09:59.233 } 00:09:59.233 ], 00:09:59.233 "name": "Malloc1", 00:09:59.233 "num_blocks": 1048576, 00:09:59.233 "product_name": "Malloc disk", 00:09:59.233 "supported_io_types": { 00:09:59.233 "abort": true, 00:09:59.233 "compare": false, 00:09:59.233 "compare_and_write": false, 00:09:59.233 "copy": true, 00:09:59.233 "flush": true, 00:09:59.233 "get_zone_info": false, 00:09:59.233 "nvme_admin": false, 00:09:59.233 "nvme_io": false, 00:09:59.233 "nvme_io_md": false, 00:09:59.233 "nvme_iov_md": false, 00:09:59.233 "read": true, 00:09:59.233 "reset": true, 00:09:59.234 "seek_data": false, 00:09:59.234 "seek_hole": false, 00:09:59.234 "unmap": true, 00:09:59.234 "write": true, 00:09:59.234 "write_zeroes": true, 00:09:59.234 "zcopy": true, 00:09:59.234 "zone_append": false, 00:09:59.234 "zone_management": false 00:09:59.234 }, 00:09:59.234 "uuid": "fb00bdff-ac9e-4017-9860-cc5acf47a691", 00:09:59.234 "zoned": false 00:09:59.234 } 00:09:59.234 ]' 00:09:59.234 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:59.234 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:59.234 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:59.234 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:59.234 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:59.234 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:59.234 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:59.234 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.492 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.492 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:59.492 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.492 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:59.492 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:01.391 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:01.649 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.583 ************************************ 00:10:02.583 START TEST filesystem_ext4 00:10:02.583 ************************************ 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
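Before this subtest body runs, the script resolved the freshly connected controller by its serial (SPDKISFASTANDAWESOME) rather than assuming a device name, checked the namespace size against the 512 MiB malloc bdev, and laid down one GPT partition. The same steps in isolation; the awk match stands in for the trace's grep -oP lookahead, and reading /sys/block/<dev>/size (512-byte sectors) is an assumption about what sec_size_to_bytes does internally:

serial=SPDKISFASTANDAWESOME
dev=$(lsblk -l -o NAME,SERIAL | awk -v s="$serial" '$2 == s {print $1; exit}')
size=$(( $(cat "/sys/block/$dev/size") * 512 ))      # sectors to bytes
(( size == 536870912 )) || echo "unexpected namespace size: $size" >&2
mkdir -p /mnt/device
parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe                                            # pick up the new partition table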
00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:02.583 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:02.583 mke2fs 1.46.5 (30-Dec-2021) 00:10:02.583 Discarding device blocks: 0/522240 done 00:10:02.583 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:02.583 Filesystem UUID: 27130651-f092-477a-9c27-c044e1c50e3d 00:10:02.583 Superblock backups stored on blocks: 00:10:02.583 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:02.583 00:10:02.583 Allocating group tables: 0/64 done 00:10:02.583 Writing inode tables: 0/64 done 00:10:02.842 Creating journal (8192 blocks): done 00:10:02.842 Writing superblocks and filesystem accounting information: 0/64 done 00:10:02.842 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:02.842 
12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 72796 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:02.842 00:10:02.842 real 0m0.364s 00:10:02.842 user 0m0.021s 00:10:02.842 sys 0m0.052s 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.842 ************************************ 00:10:02.842 END TEST filesystem_ext4 00:10:02.842 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:02.842 ************************************ 00:10:03.100 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:03.100 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:03.100 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.100 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.100 ************************************ 00:10:03.100 START TEST filesystem_btrfs 00:10:03.100 ************************************ 00:10:03.100 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:03.100 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:03.101 12:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:03.101 btrfs-progs v6.6.2 00:10:03.101 See https://btrfs.readthedocs.io for more information. 00:10:03.101 00:10:03.101 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:03.101 NOTE: several default settings have changed in version 5.15, please make sure 00:10:03.101 this does not affect your deployments: 00:10:03.101 - DUP for metadata (-m dup) 00:10:03.101 - enabled no-holes (-O no-holes) 00:10:03.101 - enabled free-space-tree (-R free-space-tree) 00:10:03.101 00:10:03.101 Label: (null) 00:10:03.101 UUID: d3ea1aa1-9099-456f-82d0-73097d94a205 00:10:03.101 Node size: 16384 00:10:03.101 Sector size: 4096 00:10:03.101 Filesystem size: 510.00MiB 00:10:03.101 Block group profiles: 00:10:03.101 Data: single 8.00MiB 00:10:03.101 Metadata: DUP 32.00MiB 00:10:03.101 System: DUP 8.00MiB 00:10:03.101 SSD detected: yes 00:10:03.101 Zoned device: no 00:10:03.101 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:03.101 Runtime features: free-space-tree 00:10:03.101 Checksum: crc32c 00:10:03.101 Number of devices: 1 00:10:03.101 Devices: 00:10:03.101 ID SIZE PATH 00:10:03.101 1 510.00MiB /dev/nvme0n1p1 00:10:03.101 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:03.101 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:03.359 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:03.359 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:03.359 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:03.359 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:03.359 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:03.359 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 72796 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:03.359 00:10:03.359 real 0m0.275s 00:10:03.359 user 0m0.024s 00:10:03.359 sys 0m0.065s 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:03.359 ************************************ 00:10:03.359 END TEST filesystem_btrfs 00:10:03.359 ************************************ 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.359 ************************************ 00:10:03.359 START TEST filesystem_xfs 00:10:03.359 ************************************ 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:03.359 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:03.359 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:03.359 = sectsz=512 attr=2, projid32bit=1 00:10:03.359 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:03.359 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:10:03.359 data = bsize=4096 blocks=130560, imaxpct=25 00:10:03.359 = sunit=0 swidth=0 blks 00:10:03.359 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:03.359 log =internal log bsize=4096 blocks=16384, version=2 00:10:03.359 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:03.359 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:04.296 Discarding blocks...Done. 00:10:04.296 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:04.296 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 72796 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:06.827 00:10:06.827 real 0m3.105s 00:10:06.827 user 0m0.027s 00:10:06.827 sys 0m0.049s 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:06.827 ************************************ 00:10:06.827 END TEST filesystem_xfs 00:10:06.827 ************************************ 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
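For orientation at this point in the log: the filesystem_ext4, filesystem_btrfs, and filesystem_xfs subtests above all run the same create-and-verify loop against the partition carved from the NVMe-oF namespace. A minimal standalone sketch of that loop follows (a hypothetical script, not the actual target/filesystem.sh; device, filesystem type, and target PID are parameters here):

  #!/usr/bin/env bash
  # Sketch of the per-filesystem check traced above: format the partition,
  # prove basic file I/O works, then confirm the target process survived.
  set -euo pipefail

  dev=${1:-/dev/nvme0n1p1}     # partition on the NVMe-oF-attached namespace
  fstype=${2:-ext4}            # ext4 | btrfs | xfs
  tgt_pid=${3:?nvmf_tgt pid}   # checked with kill -0 after the I/O
  mnt=/mnt/device

  force=-f                                      # mkfs.btrfs / mkfs.xfs force flag
  if [ "$fstype" = ext4 ]; then force=-F; fi    # mkfs.ext4 spells it -F

  "mkfs.$fstype" "$force" "$dev"
  mkdir -p "$mnt"
  mount "$dev" "$mnt"
  touch "$mnt/aaa"             # smallest possible write test
  sync
  rm "$mnt/aaa"
  sync
  umount "$mnt"

  kill -0 "$tgt_pid"                                    # target must still be running
  lsblk -l -o NAME | grep -q -w "$(basename "$dev")"    # device must still be present

That force-flag choice is why each pass logs a '[' <fstype> = ext4 ']' comparison just before its mkfs call.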
00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 72796 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 72796 ']' 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 72796 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72796 00:10:06.827 killing process with pid 72796 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72796' 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 72796 00:10:06.827 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 72796 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:07.106 00:10:07.106 real 0m9.171s 00:10:07.106 user 0m34.577s 00:10:07.106 sys 0m1.608s 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.106 ************************************ 00:10:07.106 END TEST nvmf_filesystem_no_in_capsule 00:10:07.106 ************************************ 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:07.106 ************************************ 00:10:07.106 START TEST nvmf_filesystem_in_capsule 00:10:07.106 ************************************ 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=73104 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 73104 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 73104 ']' 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
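The "Waiting for process..." line above is a poll, not a fixed sleep: the harness retries the new target's JSON-RPC socket until it answers. A rough equivalent of that start-and-wait step, assuming it runs from an SPDK checkout (flags copied from the invocation above; the ip netns wrapper is omitted and the retry budget is illustrative):

  #!/usr/bin/env bash
  # Launch nvmf_tgt, then block until its JSON-RPC socket accepts requests.
  set -euo pipefail

  sock=/var/tmp/spdk.sock
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  pid=$!

  for _ in $(seq 1 100); do
      kill -0 "$pid"   # abort if the target already exited
      # rpc_get_methods succeeds only once the app is listening on $sock
      if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          echo "nvmf_tgt (pid $pid) is ready"
          exit 0
      fi
      sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  exit 1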
00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.106 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.106 [2024-07-25 12:17:21.868404] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:10:07.106 [2024-07-25 12:17:21.868490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.386 [2024-07-25 12:17:22.002197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.386 [2024-07-25 12:17:22.105322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.386 [2024-07-25 12:17:22.105569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.386 [2024-07-25 12:17:22.105810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.386 [2024-07-25 12:17:22.105951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.386 [2024-07-25 12:17:22.106055] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.386 [2024-07-25 12:17:22.106214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.386 [2024-07-25 12:17:22.106378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.386 [2024-07-25 12:17:22.106735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.386 [2024-07-25 12:17:22.106740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.321 [2024-07-25 12:17:22.866191] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.321 12:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.321 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.321 Malloc1 00:10:08.321 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.321 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.322 [2024-07-25 12:17:23.066109] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:08.322 12:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:08.322 { 00:10:08.322 "aliases": [ 00:10:08.322 "2c5c19bc-104a-46a9-8b40-ffdc77149321" 00:10:08.322 ], 00:10:08.322 "assigned_rate_limits": { 00:10:08.322 "r_mbytes_per_sec": 0, 00:10:08.322 "rw_ios_per_sec": 0, 00:10:08.322 "rw_mbytes_per_sec": 0, 00:10:08.322 "w_mbytes_per_sec": 0 00:10:08.322 }, 00:10:08.322 "block_size": 512, 00:10:08.322 "claim_type": "exclusive_write", 00:10:08.322 "claimed": true, 00:10:08.322 "driver_specific": {}, 00:10:08.322 "memory_domains": [ 00:10:08.322 { 00:10:08.322 "dma_device_id": "system", 00:10:08.322 "dma_device_type": 1 00:10:08.322 }, 00:10:08.322 { 00:10:08.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.322 "dma_device_type": 2 00:10:08.322 } 00:10:08.322 ], 00:10:08.322 "name": "Malloc1", 00:10:08.322 "num_blocks": 1048576, 00:10:08.322 "product_name": "Malloc disk", 00:10:08.322 "supported_io_types": { 00:10:08.322 "abort": true, 00:10:08.322 "compare": false, 00:10:08.322 "compare_and_write": false, 00:10:08.322 "copy": true, 00:10:08.322 "flush": true, 00:10:08.322 "get_zone_info": false, 00:10:08.322 "nvme_admin": false, 00:10:08.322 "nvme_io": false, 00:10:08.322 "nvme_io_md": false, 00:10:08.322 "nvme_iov_md": false, 00:10:08.322 "read": true, 00:10:08.322 "reset": true, 00:10:08.322 "seek_data": false, 00:10:08.322 "seek_hole": false, 00:10:08.322 "unmap": true, 00:10:08.322 "write": true, 00:10:08.322 "write_zeroes": true, 00:10:08.322 "zcopy": true, 00:10:08.322 "zone_append": false, 00:10:08.322 "zone_management": false 00:10:08.322 }, 00:10:08.322 "uuid": "2c5c19bc-104a-46a9-8b40-ffdc77149321", 00:10:08.322 "zoned": false 00:10:08.322 } 00:10:08.322 ]' 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:08.322 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:08.580 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:08.580 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:08.580 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:08.580 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:08.581 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.581 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.581 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:08.581 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.581 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:08.581 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:11.111 12:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:11.111 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.678 ************************************ 00:10:11.678 START TEST filesystem_in_capsule_ext4 00:10:11.678 ************************************ 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:11.678 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:11.678 mke2fs 1.46.5 (30-Dec-2021) 00:10:11.935 Discarding device blocks: 0/522240 done 00:10:11.935 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:11.935 Filesystem UUID: 1ab191c8-3d2d-40b8-902f-72b5facf7ca8 00:10:11.935 Superblock backups stored on blocks: 00:10:11.935 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:11.935 00:10:11.935 Allocating group tables: 0/64 done 00:10:11.935 Writing inode tables: 
0/64 done 00:10:11.935 Creating journal (8192 blocks): done 00:10:11.935 Writing superblocks and filesystem accounting information: 0/64 done 00:10:11.935 00:10:11.936 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:11.936 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:11.936 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 73104 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.194 00:10:12.194 real 0m0.404s 00:10:12.194 user 0m0.028s 00:10:12.194 sys 0m0.051s 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:12.194 ************************************ 00:10:12.194 END TEST filesystem_in_capsule_ext4 00:10:12.194 ************************************ 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.194 
************************************ 00:10:12.194 START TEST filesystem_in_capsule_btrfs 00:10:12.194 ************************************ 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:12.194 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:12.453 btrfs-progs v6.6.2 00:10:12.453 See https://btrfs.readthedocs.io for more information. 00:10:12.453 00:10:12.453 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:12.453 NOTE: several default settings have changed in version 5.15, please make sure 00:10:12.453 this does not affect your deployments: 00:10:12.453 - DUP for metadata (-m dup) 00:10:12.453 - enabled no-holes (-O no-holes) 00:10:12.453 - enabled free-space-tree (-R free-space-tree) 00:10:12.453 00:10:12.453 Label: (null) 00:10:12.453 UUID: c89166a0-b026-4330-8f69-23f7bc908db4 00:10:12.453 Node size: 16384 00:10:12.453 Sector size: 4096 00:10:12.453 Filesystem size: 510.00MiB 00:10:12.453 Block group profiles: 00:10:12.453 Data: single 8.00MiB 00:10:12.453 Metadata: DUP 32.00MiB 00:10:12.453 System: DUP 8.00MiB 00:10:12.453 SSD detected: yes 00:10:12.453 Zoned device: no 00:10:12.453 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:12.453 Runtime features: free-space-tree 00:10:12.453 Checksum: crc32c 00:10:12.453 Number of devices: 1 00:10:12.453 Devices: 00:10:12.453 ID SIZE PATH 00:10:12.453 1 510.00MiB /dev/nvme0n1p1 00:10:12.453 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 73104 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.453 ************************************ 00:10:12.453 END TEST filesystem_in_capsule_btrfs 00:10:12.453 ************************************ 00:10:12.453 00:10:12.453 real 0m0.232s 00:10:12.453 user 0m0.020s 00:10:12.453 sys 0m0.073s 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.453 ************************************ 00:10:12.453 START TEST filesystem_in_capsule_xfs 00:10:12.453 ************************************ 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:12.453 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:12.712 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:12.712 = sectsz=512 attr=2, projid32bit=1 00:10:12.712 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:12.712 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:12.712 data = bsize=4096 blocks=130560, imaxpct=25 00:10:12.712 = sunit=0 swidth=0 blks 00:10:12.712 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:12.712 log =internal log bsize=4096 blocks=16384, version=2 00:10:12.712 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:12.712 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:13.278 Discarding blocks...Done. 
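This in_capsule pass repeats the whole filesystem flow with one setup change: the transport was created with a 4096-byte in-capsule data size (the -c 4096 in the rpc_cmd earlier in this pass), so writes of up to 4 KiB ride inside the NVMe/TCP command capsule instead of a separate data transfer. For reference, the provisioning sequence behind those rpc_cmd lines, condensed into direct rpc.py calls (a sketch assuming an SPDK checkout; all arguments are taken from the trace):

  #!/usr/bin/env bash
  # Target-side setup for this pass, then the matching initiator connect.
  set -euo pipefail
  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4 KiB in-capsule data
  $rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side, as logged:
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 \
      --hostid=12aea35b-e003-4c84-a908-b9044a3380d2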
00:10:13.278 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:13.278 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 73104 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.181 ************************************ 00:10:15.181 END TEST filesystem_in_capsule_xfs 00:10:15.181 ************************************ 00:10:15.181 00:10:15.181 real 0m2.635s 00:10:15.181 user 0m0.022s 00:10:15.181 sys 0m0.054s 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:15.181 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 73104 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 73104 ']' 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 73104 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73104 00:10:15.181 killing process with pid 73104 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73104' 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 73104 00:10:15.181 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 73104 00:10:15.747 ************************************ 00:10:15.747 END TEST nvmf_filesystem_in_capsule 00:10:15.747 ************************************ 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
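Teardown, here and at the end of the no_in_capsule pass, mirrors the setup in reverse; waitforserial_disconnect is again a bounded lsblk poll, this time waiting for the serial to disappear. A compact sketch (target PID as a parameter; the poll bound is illustrative):

  #!/usr/bin/env bash
  # Tear down one pass: drop the test partition, detach the host,
  # delete the subsystem, then stop the target.
  set -euo pipefail
  tgt_pid=${1:?nvmf_tgt pid}
  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  for _ in $(seq 1 15); do    # waitforserial_disconnect equivalent
      lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
      sleep 1
  done

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$tgt_pid"
  wait "$tgt_pid"   # valid when the target is a child of this shell, as in the harness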
00:10:15.747 00:10:15.747 real 0m8.686s 00:10:15.747 user 0m32.624s 00:10:15.747 sys 0m1.610s 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:15.747 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:15.747 rmmod nvme_tcp 00:10:15.747 rmmod nvme_fabrics 00:10:15.747 rmmod nvme_keyring 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:16.006 00:10:16.006 real 0m18.638s 00:10:16.006 user 1m7.426s 00:10:16.006 sys 0m3.591s 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.006 ************************************ 00:10:16.006 END TEST nvmf_filesystem 00:10:16.006 ************************************ 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:16.006 ************************************ 00:10:16.006 START TEST nvmf_target_discovery 00:10:16.006 ************************************ 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:16.006 * Looking for test storage... 00:10:16.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.006 12:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.006 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:16.007 Cannot find device "nvmf_tgt_br" 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.007 Cannot find device "nvmf_tgt_br2" 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:10:16.007 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:16.264 Cannot find device "nvmf_tgt_br" 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:16.264 Cannot find device "nvmf_tgt_br2" 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.264 12:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.264 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.264 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.264 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.265 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:16.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:10:16.523 00:10:16.523 --- 10.0.0.2 ping statistics --- 00:10:16.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.523 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:16.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:16.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:10:16.523 00:10:16.523 --- 10.0.0.3 ping statistics --- 00:10:16.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.523 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:16.523 00:10:16.523 --- 10.0.0.1 ping statistics --- 00:10:16.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.523 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=73558 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 73558 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 73558 ']' 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
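The pings above are the last step of nvmf_veth_init, which builds the whole NVMe/TCP test network out of one network namespace, three veth pairs, and a bridge, so the suite needs no physical NIC. The following is a minimal standalone sketch condensed from the commands traced above; interface names and addresses are copied from the log, the nvmf_tgt path is assumed to point into an SPDK build tree, and the harness's stale-interface cleanup and retry logic is omitted:

  # one namespace holds the SPDK target; the initiator stays in the root ns
  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: the *_if ends carry traffic, the *_br ends join the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # address plan: 10.0.0.1 initiator, 10.0.0.2 and 10.0.0.3 target side
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  # a bridge stitches the root-namespace ends of the pairs together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP traffic on port 4420 and let the bridge forward
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # verify reachability, then launch the target inside the namespace
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The -m 0xF core mask is also why the startup notices just below report four reactors coming up on cores 0 through 3.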
00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.523 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.523 [2024-07-25 12:17:31.217981] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:10:16.523 [2024-07-25 12:17:31.218403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.523 [2024-07-25 12:17:31.359085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.796 [2024-07-25 12:17:31.482408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.796 [2024-07-25 12:17:31.482760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.796 [2024-07-25 12:17:31.482963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.796 [2024-07-25 12:17:31.483130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.796 [2024-07-25 12:17:31.483174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.796 [2024-07-25 12:17:31.483478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.796 [2024-07-25 12:17:31.483612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.796 [2024-07-25 12:17:31.484231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.796 [2024-07-25 12:17:31.484246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.362 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.362 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:10:17.362 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:17.362 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:17.362 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.362 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.362 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.362 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 [2024-07-25 12:17:32.225433] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 Null1 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 [2024-07-25 12:17:32.286788] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 Null2 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode2 Null2 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 Null3 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 Null4 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.622 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 4420 00:10:17.623 00:10:17.623 Discovery Log Number of Records 6, Generation counter 6 00:10:17.623 =====Discovery Log Entry 0====== 00:10:17.623 trtype: tcp 00:10:17.623 adrfam: ipv4 00:10:17.623 subtype: current discovery subsystem 00:10:17.623 treq: not required 00:10:17.623 portid: 0 
00:10:17.623 trsvcid: 4420 00:10:17.623 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:17.623 traddr: 10.0.0.2 00:10:17.623 eflags: explicit discovery connections, duplicate discovery information 00:10:17.623 sectype: none 00:10:17.623 =====Discovery Log Entry 1====== 00:10:17.623 trtype: tcp 00:10:17.623 adrfam: ipv4 00:10:17.623 subtype: nvme subsystem 00:10:17.623 treq: not required 00:10:17.623 portid: 0 00:10:17.623 trsvcid: 4420 00:10:17.623 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:17.623 traddr: 10.0.0.2 00:10:17.623 eflags: none 00:10:17.623 sectype: none 00:10:17.623 =====Discovery Log Entry 2====== 00:10:17.623 trtype: tcp 00:10:17.623 adrfam: ipv4 00:10:17.623 subtype: nvme subsystem 00:10:17.623 treq: not required 00:10:17.623 portid: 0 00:10:17.623 trsvcid: 4420 00:10:17.623 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:17.623 traddr: 10.0.0.2 00:10:17.623 eflags: none 00:10:17.623 sectype: none 00:10:17.623 =====Discovery Log Entry 3====== 00:10:17.623 trtype: tcp 00:10:17.623 adrfam: ipv4 00:10:17.623 subtype: nvme subsystem 00:10:17.623 treq: not required 00:10:17.623 portid: 0 00:10:17.623 trsvcid: 4420 00:10:17.623 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:17.623 traddr: 10.0.0.2 00:10:17.623 eflags: none 00:10:17.623 sectype: none 00:10:17.623 =====Discovery Log Entry 4====== 00:10:17.623 trtype: tcp 00:10:17.623 adrfam: ipv4 00:10:17.623 subtype: nvme subsystem 00:10:17.623 treq: not required 00:10:17.623 portid: 0 00:10:17.623 trsvcid: 4420 00:10:17.623 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:17.623 traddr: 10.0.0.2 00:10:17.623 eflags: none 00:10:17.623 sectype: none 00:10:17.623 =====Discovery Log Entry 5====== 00:10:17.623 trtype: tcp 00:10:17.623 adrfam: ipv4 00:10:17.623 subtype: discovery subsystem referral 00:10:17.623 treq: not required 00:10:17.623 portid: 0 00:10:17.623 trsvcid: 4430 00:10:17.623 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:17.623 traddr: 10.0.0.2 00:10:17.623 eflags: none 00:10:17.623 sectype: none 00:10:17.623 Perform nvmf subsystem discovery via RPC 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.623 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.623 [ 00:10:17.623 { 00:10:17.623 "allow_any_host": true, 00:10:17.623 "hosts": [], 00:10:17.623 "listen_addresses": [ 00:10:17.623 { 00:10:17.623 "adrfam": "IPv4", 00:10:17.623 "traddr": "10.0.0.2", 00:10:17.623 "trsvcid": "4420", 00:10:17.623 "trtype": "TCP" 00:10:17.623 } 00:10:17.623 ], 00:10:17.623 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:17.623 "subtype": "Discovery" 00:10:17.623 }, 00:10:17.623 { 00:10:17.623 "allow_any_host": true, 00:10:17.623 "hosts": [], 00:10:17.623 "listen_addresses": [ 00:10:17.623 { 00:10:17.623 "adrfam": "IPv4", 00:10:17.623 "traddr": "10.0.0.2", 00:10:17.623 "trsvcid": "4420", 00:10:17.623 "trtype": "TCP" 00:10:17.623 } 00:10:17.623 ], 00:10:17.623 "max_cntlid": 65519, 00:10:17.881 "max_namespaces": 32, 00:10:17.881 "min_cntlid": 1, 00:10:17.881 "model_number": "SPDK bdev Controller", 00:10:17.881 "namespaces": [ 00:10:17.881 { 00:10:17.881 "bdev_name": "Null1", 00:10:17.881 "name": "Null1", 00:10:17.881 "nguid": 
"FA76F07194434FB5ABEF8DBE4D6EAF6D", 00:10:17.881 "nsid": 1, 00:10:17.881 "uuid": "fa76f071-9443-4fb5-abef-8dbe4d6eaf6d" 00:10:17.881 } 00:10:17.881 ], 00:10:17.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.881 "serial_number": "SPDK00000000000001", 00:10:17.881 "subtype": "NVMe" 00:10:17.881 }, 00:10:17.881 { 00:10:17.881 "allow_any_host": true, 00:10:17.881 "hosts": [], 00:10:17.881 "listen_addresses": [ 00:10:17.881 { 00:10:17.881 "adrfam": "IPv4", 00:10:17.881 "traddr": "10.0.0.2", 00:10:17.881 "trsvcid": "4420", 00:10:17.881 "trtype": "TCP" 00:10:17.881 } 00:10:17.881 ], 00:10:17.881 "max_cntlid": 65519, 00:10:17.881 "max_namespaces": 32, 00:10:17.881 "min_cntlid": 1, 00:10:17.881 "model_number": "SPDK bdev Controller", 00:10:17.881 "namespaces": [ 00:10:17.881 { 00:10:17.881 "bdev_name": "Null2", 00:10:17.881 "name": "Null2", 00:10:17.881 "nguid": "3B1A5CEAAAFF4B649994564FF73EE885", 00:10:17.881 "nsid": 1, 00:10:17.881 "uuid": "3b1a5cea-aaff-4b64-9994-564ff73ee885" 00:10:17.881 } 00:10:17.881 ], 00:10:17.882 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:17.882 "serial_number": "SPDK00000000000002", 00:10:17.882 "subtype": "NVMe" 00:10:17.882 }, 00:10:17.882 { 00:10:17.882 "allow_any_host": true, 00:10:17.882 "hosts": [], 00:10:17.882 "listen_addresses": [ 00:10:17.882 { 00:10:17.882 "adrfam": "IPv4", 00:10:17.882 "traddr": "10.0.0.2", 00:10:17.882 "trsvcid": "4420", 00:10:17.882 "trtype": "TCP" 00:10:17.882 } 00:10:17.882 ], 00:10:17.882 "max_cntlid": 65519, 00:10:17.882 "max_namespaces": 32, 00:10:17.882 "min_cntlid": 1, 00:10:17.882 "model_number": "SPDK bdev Controller", 00:10:17.882 "namespaces": [ 00:10:17.882 { 00:10:17.882 "bdev_name": "Null3", 00:10:17.882 "name": "Null3", 00:10:17.882 "nguid": "7F137A819585491591E61A62A3B663DC", 00:10:17.882 "nsid": 1, 00:10:17.882 "uuid": "7f137a81-9585-4915-91e6-1a62a3b663dc" 00:10:17.882 } 00:10:17.882 ], 00:10:17.882 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:17.882 "serial_number": "SPDK00000000000003", 00:10:17.882 "subtype": "NVMe" 00:10:17.882 }, 00:10:17.882 { 00:10:17.882 "allow_any_host": true, 00:10:17.882 "hosts": [], 00:10:17.882 "listen_addresses": [ 00:10:17.882 { 00:10:17.882 "adrfam": "IPv4", 00:10:17.882 "traddr": "10.0.0.2", 00:10:17.882 "trsvcid": "4420", 00:10:17.882 "trtype": "TCP" 00:10:17.882 } 00:10:17.882 ], 00:10:17.882 "max_cntlid": 65519, 00:10:17.882 "max_namespaces": 32, 00:10:17.882 "min_cntlid": 1, 00:10:17.882 "model_number": "SPDK bdev Controller", 00:10:17.882 "namespaces": [ 00:10:17.882 { 00:10:17.882 "bdev_name": "Null4", 00:10:17.882 "name": "Null4", 00:10:17.882 "nguid": "B9C8495C8D1441779D3C66605016E261", 00:10:17.882 "nsid": 1, 00:10:17.882 "uuid": "b9c8495c-8d14-4177-9d3c-66605016e261" 00:10:17.882 } 00:10:17.882 ], 00:10:17.882 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:17.882 "serial_number": "SPDK00000000000004", 00:10:17.882 "subtype": "NVMe" 00:10:17.882 } 00:10:17.882 ] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 
12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.882 rmmod nvme_tcp 00:10:17.882 rmmod nvme_fabrics 00:10:17.882 rmmod nvme_keyring 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 73558 ']' 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 73558 
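This teardown unwinds a strictly symmetric setup: discovery.sh created four null bdevs, wrapped each in its own subsystem with one namespace and one 10.0.0.2:4420 listener, then exposed the discovery service and a referral to port 4430. A condensed sketch of both halves, assuming rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py talking to the target's /var/tmp/spdk.sock:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 4); do
      rpc_cmd bdev_null_create Null$i 102400 512     # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
              -a -s SPDK0000000000000$i              # -a allows any host NQN
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
              -t tcp -a 10.0.0.2 -s 4420
  done
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # teardown walks the same list in reverse
  for i in $(seq 1 4); do
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      rpc_cmd bdev_null_delete Null$i
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

That setup is exactly what the nvme discover run above reflected: six records, namely the current discovery subsystem, the four NVMe subsystems on 4420, and the 4430 referral. After teardown, bdev_get_bdevs returning an empty name list is the final check before nvmftestfini.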
00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 73558 ']' 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 73558 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73558 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:17.882 killing process with pid 73558 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73558' 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 73558 00:10:17.882 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 73558 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:18.140 ************************************ 00:10:18.140 END TEST nvmf_target_discovery 00:10:18.140 ************************************ 00:10:18.140 00:10:18.140 real 0m2.285s 00:10:18.140 user 0m6.121s 00:10:18.140 sys 0m0.580s 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.140 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:18.399 ************************************ 00:10:18.399 START TEST nvmf_referrals 00:10:18.399 
************************************ 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:18.399 * Looking for test storage... 00:10:18.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:18.399 12:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.399 12:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.399 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:18.400 Cannot find device "nvmf_tgt_br" 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.400 Cannot find device "nvmf_tgt_br2" 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:18.400 Cannot find device "nvmf_tgt_br" 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:18.400 Cannot find device "nvmf_tgt_br2" 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:18.400 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
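Annotation: the nvmf_veth_init sequence traced above and below assembles a self-contained test network — a namespace for the target, veth pairs linking it to the host, and a bridge tying the host-side ends together. A minimal standalone sketch of that topology, using the interface and namespace names from this trace (run as root; one target interface shown, the nvmf_tgt_if2/nvmf_tgt_br2 pair is set up the same way):

# Sketch: the virtual topology nvmf_veth_init builds (names taken from the trace).
ip netns add nvmf_tgt_ns_spdk                                # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                              # bridge joins both host-side peer ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                           # sanity check, as the trace does

The ping replies recorded in the log (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm both directions of this topology before the target is started.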
00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:18.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:10:18.664 00:10:18.664 --- 10.0.0.2 ping statistics --- 00:10:18.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.664 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:18.664 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.664 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:10:18.664 00:10:18.664 --- 10.0.0.3 ping statistics --- 00:10:18.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.664 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:18.664 00:10:18.664 --- 10.0.0.1 ping statistics --- 00:10:18.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.664 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:18.664 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=73781 00:10:18.665 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.665 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 73781 00:10:18.665 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 73781 ']' 00:10:18.665 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.665 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:18.665 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.665 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:18.665 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:18.927 [2024-07-25 12:17:33.528000] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:10:18.927 [2024-07-25 12:17:33.528080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.927 [2024-07-25 12:17:33.660998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.927 [2024-07-25 12:17:33.779607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.927 [2024-07-25 12:17:33.779669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.927 [2024-07-25 12:17:33.779682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.927 [2024-07-25 12:17:33.779706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.927 [2024-07-25 12:17:33.779714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.927 [2024-07-25 12:17:33.779875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.927 [2024-07-25 12:17:33.780481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.927 [2024-07-25 12:17:33.780551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.927 [2024-07-25 12:17:33.780558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.864 [2024-07-25 12:17:34.618012] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.864 [2024-07-25 12:17:34.642241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.864 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:19.865 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.122 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:20.123 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
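Annotation: each referral change in this test is verified from two sides — the control plane (asking the target over RPC what it advertises) and the wire (reading the discovery log page from the initiator with nvme discover). A sketch of that round-trip under the assumption that rpc_cmd is a thin wrapper around scripts/rpc.py, as in SPDK's test harness; the --hostnqn/--hostid flags from the trace are omitted here for brevity:

# Add a referral, then confirm both views agree.
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430

# Control-plane view: what the target believes it is advertising.
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Wire view: read the discovery log page and drop the record describing the
# discovery subsystem itself, leaving only the referral entries (same jq
# filter as the trace above).
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# The test passes only when both outputs match (here: 127.0.0.2), then the
# referral is removed and the check repeats expecting an empty list.
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

Comparing sorted output from both paths is what the get_referral_ips helper in the trace does with its rpc and nvme modes.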
00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.381 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:20.640 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:20.898 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:10:21.157 rmmod nvme_tcp 00:10:21.157 rmmod nvme_fabrics 00:10:21.157 rmmod nvme_keyring 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 73781 ']' 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 73781 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 73781 ']' 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 73781 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73781 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73781' 00:10:21.157 killing process with pid 73781 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 73781 00:10:21.157 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 73781 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:21.416 ************************************ 00:10:21.416 END TEST nvmf_referrals 00:10:21.416 ************************************ 00:10:21.416 00:10:21.416 real 0m3.067s 00:10:21.416 user 0m10.052s 00:10:21.416 sys 0m0.826s 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:21.416 ************************************ 00:10:21.416 START TEST nvmf_connect_disconnect 00:10:21.416 ************************************ 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:21.416 * Looking for test storage... 00:10:21.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.416 12:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.416 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.417 12:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:21.417 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:21.676 Cannot find device "nvmf_tgt_br" 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.676 Cannot find device "nvmf_tgt_br2" 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:21.676 Cannot find device "nvmf_tgt_br" 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:21.676 Cannot find device "nvmf_tgt_br2" 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
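Annotation: the "Cannot find device" and "Cannot open network namespace" messages above are expected, not failures. The harness tears down any leftover topology before building a fresh one, and each teardown command's exit status is discarded (the trace shows a bare true entry after each failing step) so a clean host does not abort the script. A sketch of the equivalent pattern, written with || true as shorthand for what the trace records:

# Sketch: fault-tolerant teardown before setup; every step may legitimately
# fail on a fresh machine, so failures are swallowed.
ip link set nvmf_init_br nomaster || true
ip link set nvmf_tgt_br nomaster || true        # "Cannot find device" is fine here
ip link set nvmf_tgt_br2 nomaster || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true   # netns may not exist yet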
00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:21.676 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:21.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:21.935 00:10:21.935 --- 10.0.0.2 ping statistics --- 00:10:21.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.935 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:21.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:21.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:10:21.935 00:10:21.935 --- 10.0.0.3 ping statistics --- 00:10:21.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.935 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:21.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:21.935 00:10:21.935 --- 10.0.0.1 ping statistics --- 00:10:21.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.935 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=74083 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 74083 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 74083 ']' 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.935 12:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.935 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.935 [2024-07-25 12:17:36.633588] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:10:21.935 [2024-07-25 12:17:36.633730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.935 [2024-07-25 12:17:36.770575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.193 [2024-07-25 12:17:36.887112] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.193 [2024-07-25 12:17:36.887165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.193 [2024-07-25 12:17:36.887193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.193 [2024-07-25 12:17:36.887201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.193 [2024-07-25 12:17:36.887208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
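The nvmf_veth_init trace at the top of this run is the harness building its test topology: veth pairs for the initiator and two target ports, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined on a bridge, and iptables rules opening TCP port 4420. Condensed into a standalone sketch (every command below is taken from the trace; the 10.0.0.1-3 addresses are the harness defaults shown above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three single-packet pings logged above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify both directions across the bridge before the target application is started.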
00:10:22.193 [2024-07-25 12:17:36.887393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.193 [2024-07-25 12:17:36.887544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.193 [2024-07-25 12:17:36.888220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.193 [2024-07-25 12:17:36.888249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:23.129 [2024-07-25 12:17:37.726420] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:23.129 12:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:23.129 [2024-07-25 12:17:37.803105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:23.129 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:25.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.550 rmmod nvme_tcp 00:10:34.550 rmmod nvme_fabrics 00:10:34.550 rmmod nvme_keyring 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 74083 ']' 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 74083 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 74083 ']' 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 74083 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
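connect_disconnect.sh drives the target through rpc_cmd, a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock: create the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem with one namespace and one listener, then loop host-side connect/disconnect num_iterations=5 times. Each "NQN:...cnode1 disconnected 1 controller(s)" line above is one iteration. A sketch of the same sequence (the RPC calls are verbatim from the trace; the loop body is a reconstruction of the script, not a verbatim copy):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    bdev=$(rpc_cmd bdev_malloc_create 64 512)     # returns "Malloc0"
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for ((i = 0; i < 5; i++)); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
    done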
00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74083 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.550 killing process with pid 74083 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74083' 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 74083 00:10:34.550 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 74083 00:10:34.809 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:34.809 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:34.809 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:34.809 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.809 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.809 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.809 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.809 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.809 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:34.809 00:10:34.809 real 0m13.302s 00:10:34.809 user 0m49.044s 00:10:34.809 sys 0m1.851s 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.810 ************************************ 00:10:34.810 END TEST nvmf_connect_disconnect 00:10:34.810 ************************************ 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:34.810 ************************************ 00:10:34.810 START TEST nvmf_multitarget 00:10:34.810 ************************************ 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:34.810 * Looking for test storage... 
00:10:34.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.810 12:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp 
']' 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:34.810 Cannot find device "nvmf_tgt_br" 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:10:34.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:10:34.810 Cannot find device "nvmf_tgt_br2" 00:10:34.811 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:10:34.811 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:34.811 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:34.811 Cannot find device "nvmf_tgt_br" 00:10:34.811 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:10:34.811 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:34.811 Cannot find device "nvmf_tgt_br2" 00:10:34.811 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:10:34.811 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
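Before this rebuild, nvmf_veth_init first tears down whatever a previous run left behind; the "Cannot find device" and "Cannot open network namespace" messages above are the expected no-op case, and the "# true" entries in the trace show that each teardown step is allowed to fail. A minimal sketch of that pattern (the "|| true" guards are an assumption about how common.sh tolerates the failures; the errors are printed, not suppressed):

    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br  nomaster || true
    ip link set nvmf_tgt_br2 nomaster || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns add nvmf_tgt_ns_spdk   # remove_spdk_ns dropped it earlier; recreate and rebuild

The bring-up that surrounds this note then repeats the same veth/bridge/iptables sequence as the first test.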
00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:35.069 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.327 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.327 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.327 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:35.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:35.327 00:10:35.327 --- 10.0.0.2 ping statistics --- 00:10:35.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.327 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:35.327 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:35.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:10:35.327 00:10:35.327 --- 10.0.0.3 ping statistics --- 00:10:35.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.327 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:35.327 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:35.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:35.328 00:10:35.328 --- 10.0.0.1 ping statistics --- 00:10:35.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.328 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=74481 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 74481 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 74481 ']' 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.328 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:35.328 [2024-07-25 12:17:50.030735] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
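nvmfappstart then launches the target inside the namespace, as the nvmf/common.sh@480 line above shows: -i 0 selects the shared-memory id, -e 0xFFFF enables all tracepoint groups (hence the spdk_trace notices), and -m 0xF is a four-core mask, matching the "Total cores available: 4" notice and the four "Reactor started on core" lines each run prints. Roughly (the backgrounding and pid capture are assumptions about the helper, not verbatim):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # blocks until the app answers on /var/tmp/spdk.sock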
00:10:35.328 [2024-07-25 12:17:50.030856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.328 [2024-07-25 12:17:50.164930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.586 [2024-07-25 12:17:50.266803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.586 [2024-07-25 12:17:50.266880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.586 [2024-07-25 12:17:50.266907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.586 [2024-07-25 12:17:50.266915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.586 [2024-07-25 12:17:50.266922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.586 [2024-07-25 12:17:50.267703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.586 [2024-07-25 12:17:50.267944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.586 [2024-07-25 12:17:50.268071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.586 [2024-07-25 12:17:50.268076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.152 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.152 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:10:36.152 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.152 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.152 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:36.152 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.152 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:36.152 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:36.152 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:36.411 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:36.411 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:36.411 "nvmf_tgt_1" 00:10:36.411 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:36.669 "nvmf_tgt_2" 00:10:36.669 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:36.669 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:10:36.669 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:36.669 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:36.928 true 00:10:36.928 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:36.928 true 00:10:36.928 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:36.928 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.187 rmmod nvme_tcp 00:10:37.187 rmmod nvme_fabrics 00:10:37.187 rmmod nvme_keyring 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 74481 ']' 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 74481 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 74481 ']' 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 74481 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74481 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.187 killing process with pid 74481 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
74481' 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 74481 00:10:37.187 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 74481 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:37.446 00:10:37.446 real 0m2.768s 00:10:37.446 user 0m8.842s 00:10:37.446 sys 0m0.707s 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.446 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:37.446 ************************************ 00:10:37.446 END TEST nvmf_multitarget 00:10:37.446 ************************************ 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.705 ************************************ 00:10:37.705 START TEST nvmf_rpc 00:10:37.705 ************************************ 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:37.705 * Looking for test storage... 
00:10:37.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.705 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.706 12:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:37.706 Cannot find device "nvmf_tgt_br" 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.706 Cannot find device "nvmf_tgt_br2" 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:37.706 Cannot find device "nvmf_tgt_br" 00:10:37.706 12:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:37.706 Cannot find device "nvmf_tgt_br2" 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:37.706 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:37.980 
12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:37.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:10:37.980 00:10:37.980 --- 10.0.0.2 ping statistics --- 00:10:37.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.980 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:37.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:10:37.980 00:10:37.980 --- 10.0.0.3 ping statistics --- 00:10:37.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.980 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:37.980 00:10:37.980 --- 10.0.0.1 ping statistics --- 00:10:37.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.980 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.980 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=74711 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@482 -- # waitforlisten 74711 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 74711 ']' 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.981 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.981 [2024-07-25 12:17:52.824754] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:10:37.981 [2024-07-25 12:17:52.824857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.273 [2024-07-25 12:17:52.959688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.273 [2024-07-25 12:17:53.065939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.273 [2024-07-25 12:17:53.066024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.273 [2024-07-25 12:17:53.066041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.273 [2024-07-25 12:17:53.066054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.273 [2024-07-25 12:17:53.066064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
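Note: entries @480-@482 above reduce to starting nvmf_tgt inside the namespace and blocking until its RPC socket answers. A minimal stand-in for waitforlisten, assuming the default /var/tmp/spdk.sock and a repo-relative rpc.py path (the real helper also retries and checks the pid), might look like:

# Start the target in the netns and wait for its RPC socket to come up
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # poll until the app listens on the UNIX domain socket
done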
00:10:38.273 [2024-07-25 12:17:53.066238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.273 [2024-07-25 12:17:53.066432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.273 [2024-07-25 12:17:53.066719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.273 [2024-07-25 12:17:53.066729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:39.208 "poll_groups": [ 00:10:39.208 { 00:10:39.208 "admin_qpairs": 0, 00:10:39.208 "completed_nvme_io": 0, 00:10:39.208 "current_admin_qpairs": 0, 00:10:39.208 "current_io_qpairs": 0, 00:10:39.208 "io_qpairs": 0, 00:10:39.208 "name": "nvmf_tgt_poll_group_000", 00:10:39.208 "pending_bdev_io": 0, 00:10:39.208 "transports": [] 00:10:39.208 }, 00:10:39.208 { 00:10:39.208 "admin_qpairs": 0, 00:10:39.208 "completed_nvme_io": 0, 00:10:39.208 "current_admin_qpairs": 0, 00:10:39.208 "current_io_qpairs": 0, 00:10:39.208 "io_qpairs": 0, 00:10:39.208 "name": "nvmf_tgt_poll_group_001", 00:10:39.208 "pending_bdev_io": 0, 00:10:39.208 "transports": [] 00:10:39.208 }, 00:10:39.208 { 00:10:39.208 "admin_qpairs": 0, 00:10:39.208 "completed_nvme_io": 0, 00:10:39.208 "current_admin_qpairs": 0, 00:10:39.208 "current_io_qpairs": 0, 00:10:39.208 "io_qpairs": 0, 00:10:39.208 "name": "nvmf_tgt_poll_group_002", 00:10:39.208 "pending_bdev_io": 0, 00:10:39.208 "transports": [] 00:10:39.208 }, 00:10:39.208 { 00:10:39.208 "admin_qpairs": 0, 00:10:39.208 "completed_nvme_io": 0, 00:10:39.208 "current_admin_qpairs": 0, 00:10:39.208 "current_io_qpairs": 0, 00:10:39.208 "io_qpairs": 0, 00:10:39.208 "name": "nvmf_tgt_poll_group_003", 00:10:39.208 "pending_bdev_io": 0, 00:10:39.208 "transports": [] 00:10:39.208 } 00:10:39.208 ], 00:10:39.208 "tick_rate": 2200000000 00:10:39.208 }' 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
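Note: the rpc.sh@28 check above is jcount, i.e. a jq filter piped through wc -l, exactly as traced at @14-@15. With core mask 0xF the target runs one poll group per reactor core, so four groups are expected:

# jcount '.poll_groups[].name' — one poll group per reactor core (mask 0xF => 4)
stats=$(scripts/rpc.py nvmf_get_stats)
groups=$(echo "$stats" | jq '.poll_groups[].name' | wc -l)
(( groups == 4 ))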
00:10:39.208 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:39.208 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:39.208 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.208 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.208 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.208 [2024-07-25 12:17:54.034617] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.208 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.208 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:39.208 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.208 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.467 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:39.468 "poll_groups": [ 00:10:39.468 { 00:10:39.468 "admin_qpairs": 0, 00:10:39.468 "completed_nvme_io": 0, 00:10:39.468 "current_admin_qpairs": 0, 00:10:39.468 "current_io_qpairs": 0, 00:10:39.468 "io_qpairs": 0, 00:10:39.468 "name": "nvmf_tgt_poll_group_000", 00:10:39.468 "pending_bdev_io": 0, 00:10:39.468 "transports": [ 00:10:39.468 { 00:10:39.468 "trtype": "TCP" 00:10:39.468 } 00:10:39.468 ] 00:10:39.468 }, 00:10:39.468 { 00:10:39.468 "admin_qpairs": 0, 00:10:39.468 "completed_nvme_io": 0, 00:10:39.468 "current_admin_qpairs": 0, 00:10:39.468 "current_io_qpairs": 0, 00:10:39.468 "io_qpairs": 0, 00:10:39.468 "name": "nvmf_tgt_poll_group_001", 00:10:39.468 "pending_bdev_io": 0, 00:10:39.468 "transports": [ 00:10:39.468 { 00:10:39.468 "trtype": "TCP" 00:10:39.468 } 00:10:39.468 ] 00:10:39.468 }, 00:10:39.468 { 00:10:39.468 "admin_qpairs": 0, 00:10:39.468 "completed_nvme_io": 0, 00:10:39.468 "current_admin_qpairs": 0, 00:10:39.468 "current_io_qpairs": 0, 00:10:39.468 "io_qpairs": 0, 00:10:39.468 "name": "nvmf_tgt_poll_group_002", 00:10:39.468 "pending_bdev_io": 0, 00:10:39.468 "transports": [ 00:10:39.468 { 00:10:39.468 "trtype": "TCP" 00:10:39.468 } 00:10:39.468 ] 00:10:39.468 }, 00:10:39.468 { 00:10:39.468 "admin_qpairs": 0, 00:10:39.468 "completed_nvme_io": 0, 00:10:39.468 "current_admin_qpairs": 0, 00:10:39.468 "current_io_qpairs": 0, 00:10:39.468 "io_qpairs": 0, 00:10:39.468 "name": "nvmf_tgt_poll_group_003", 00:10:39.468 "pending_bdev_io": 0, 00:10:39.468 "transports": [ 00:10:39.468 { 00:10:39.468 "trtype": "TCP" 00:10:39.468 } 00:10:39.468 ] 00:10:39.468 } 00:10:39.468 ], 00:10:39.468 "tick_rate": 2200000000 00:10:39.468 }' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:39.468 12:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.468 Malloc1 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.468 [2024-07-25 12:17:54.233112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -a 10.0.0.2 -s 4420 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -a 10.0.0.2 -s 4420 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -a 10.0.0.2 -s 4420 00:10:39.468 [2024-07-25 12:17:54.261395] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2' 00:10:39.468 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:39.468 could not add new controller: failed to write to nvme-fabrics device 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.468 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 
--hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:39.727 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.727 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:39.727 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.727 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:39.727 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:41.628 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:41.628 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:41.628 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.628 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:41.628 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.628 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:41.628 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.887 [2024-07-25 12:17:56.552285] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2' 00:10:41.887 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:41.887 could not add new controller: failed to write to nvme-fabrics device 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:41.887 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
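Note: rpc.sh@58-@75 above exercise the subsystem's host ACL: the first connect is expected to fail (the NOT wrapper inverts the exit status), nvmf_subsystem_add_host then admits the host NQN, and after remove_host the rejection repeats until allow_any_host -e opens the subsystem. Condensed, with the host NQN taken from the log:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2
# Rejected: empty host list and any-host disabled (-d) on the subsystem
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" || true
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"   # admit this host
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
# Connect fails again until any-host is enabled for the subsystem
scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1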
00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.417 [2024-07-25 12:17:58.954874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.417 
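Note: from rpc.sh@81 the test loops five times; each pass (the first is partially visible above and continues below) amounts to the sequence sketched here, using the Malloc1 bdev created at @49 and the same host NQN as before. A condensed restatement of the traced commands, not the verbatim script:

# Body of the seq 1 5 loop at rpc.sh@81-@94
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2
for i in $(seq 1 5); do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
    sleep 2    # waitforserial: give the SPDKISFASTANDAWESOME device time to appear in lsblk
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done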
12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.417 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:44.417 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.417 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:44.417 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.417 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:44.417 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:46.318 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:46.318 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:46.318 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.318 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:46.318 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.318 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:46.318 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.577 [2024-07-25 12:18:01.366148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.577 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.835 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.835 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:10:46.835 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.835 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:46.835 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:48.734 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:48.734 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:48.734 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.734 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:48.734 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.734 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:48.734 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.992 [2024-07-25 12:18:03.769690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.992 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.250 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.250 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:49.250 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.250 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:49.250 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:51.184 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:51.184 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:51.184 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.184 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:51.184 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.184 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:51.184 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.442 12:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.442 [2024-07-25 12:18:06.176804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.442 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.700 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.700 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:51.700 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.700 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:51.700 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.600 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.858 [2024-07-25 12:18:08.468016] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:53.858 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:56.391 12:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 [2024-07-25 12:18:10.779334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 [2024-07-25 12:18:10.827348] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 [2024-07-25 12:18:10.875378] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.391 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 [2024-07-25 12:18:10.923374] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 [2024-07-25 12:18:10.971448] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.392 12:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.392 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:10:56.392   "poll_groups": [
00:10:56.392     {
00:10:56.392       "admin_qpairs": 2,
00:10:56.392       "completed_nvme_io": 164,
00:10:56.392       "current_admin_qpairs": 0,
00:10:56.392       "current_io_qpairs": 0,
00:10:56.392       "io_qpairs": 16,
00:10:56.392       "name": "nvmf_tgt_poll_group_000",
00:10:56.392       "pending_bdev_io": 0,
00:10:56.392       "transports": [
00:10:56.392         {
00:10:56.392           "trtype": "TCP"
00:10:56.392         }
00:10:56.392       ]
00:10:56.392     },
00:10:56.392     {
00:10:56.392       "admin_qpairs": 3,
00:10:56.392       "completed_nvme_io": 67,
00:10:56.392       "current_admin_qpairs": 0,
00:10:56.392       "current_io_qpairs": 0,
00:10:56.392       "io_qpairs": 17,
00:10:56.392       "name": "nvmf_tgt_poll_group_001",
00:10:56.392       "pending_bdev_io": 0,
00:10:56.392       "transports": [
00:10:56.392         {
00:10:56.392           "trtype": "TCP"
00:10:56.392         }
00:10:56.392       ]
00:10:56.392     },
00:10:56.392     {
00:10:56.392       "admin_qpairs": 1,
00:10:56.392       "completed_nvme_io": 70,
00:10:56.392       "current_admin_qpairs": 0,
00:10:56.392       "current_io_qpairs": 0,
00:10:56.392       "io_qpairs": 19,
00:10:56.392       "name": "nvmf_tgt_poll_group_002",
00:10:56.392       "pending_bdev_io": 0,
00:10:56.392       "transports": [
00:10:56.392         {
00:10:56.392           "trtype": "TCP"
00:10:56.392         }
00:10:56.392       ]
00:10:56.392     },
00:10:56.392     {
00:10:56.392       "admin_qpairs": 1,
00:10:56.392       "completed_nvme_io": 119,
00:10:56.392       "current_admin_qpairs": 0,
00:10:56.392       "current_io_qpairs": 0,
00:10:56.392       "io_qpairs": 18,
00:10:56.392       "name": "nvmf_tgt_poll_group_003",
00:10:56.392       "pending_bdev_io": 0,
00:10:56.392       "transports": [
00:10:56.392         {
00:10:56.392           "trtype": "TCP"
00:10:56.392         }
00:10:56.392       ]
00:10:56.392     }
00:10:56.392   ],
00:10:56.392   "tick_rate": 2200000000
00:10:56.392 }'
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq
'.poll_groups[].io_qpairs' 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.392 rmmod nvme_tcp 00:10:56.392 rmmod nvme_fabrics 00:10:56.392 rmmod nvme_keyring 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 74711 ']' 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 74711 00:10:56.392 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 74711 ']' 00:10:56.393 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 74711 00:10:56.393 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:10:56.393 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.393 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74711 00:10:56.393 killing process with pid 74711 00:10:56.393 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.393 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.393 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74711' 00:10:56.393 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 74711 00:10:56.393 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 74711 00:10:56.651 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.651 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.651 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.651 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.651 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.651 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:56.651 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:56.651 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:10:56.910
00:10:56.910 real 0m19.216s
00:10:56.910 user 1m12.332s
00:10:56.910 sys 0m2.662s
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:56.910 ************************************
00:10:56.910 END TEST nvmf_rpc
00:10:56.910 ************************************
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:56.910 ************************************
00:10:56.910 START TEST nvmf_invalid
00:10:56.910 ************************************
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:10:56.910 * Looking for test storage...
00:10:56.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2
00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- #
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.910 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt 
== phy ]]
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]]
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]]
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:10:56.911 Cannot find device "nvmf_tgt_br"
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # true
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:10:56.911 Cannot find device "nvmf_tgt_br2"
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # true
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:10:56.911 Cannot find device "nvmf_tgt_br"
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # true
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:10:56.911 Cannot find device "nvmf_tgt_br2"
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # true
00:10:56.911 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:57.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:57.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:10:57.170 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
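The nvmf_veth_init sequence above builds the suite's point-to-point test network: one initiator-side veth (nvmf_init_if, 10.0.0.1/24) left in the root namespace, two target-side veths (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24) moved into the nvmf_tgt_ns_spdk namespace, and all three peer ends enslaved to the nvmf_br bridge, with an iptables rule admitting NVMe/TCP traffic on port 4420. A minimal sketch of the same topology, reduced to a single target interface (names and addresses taken from the trace; run as root; teardown and the second target interface omitted):

    #!/usr/bin/env bash
    # One initiator veth in the root namespace, one target veth in a private
    # namespace, peer ends bridged together: the shape nvmf_veth_init sets up.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # join the peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port

The pings that follow are simply the connectivity check across that bridge, in both directions, before the target is launched inside the namespace.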
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:10:57.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:57.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms
00:10:57.170
00:10:57.170 --- 10.0.0.2 ping statistics ---
00:10:57.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:57.170 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:10:57.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:57.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms
00:10:57.170
00:10:57.170 --- 10.0.0.3 ping statistics ---
00:10:57.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:57.170 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:57.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:57.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms
00:10:57.170
00:10:57.170 --- 10.0.0.1 ping statistics ---
00:10:57.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:57.170 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@433 -- # return 0
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:57.170 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=75223
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 75223
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 75223 ']'
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- #
local max_retries=100 00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:57.430 [2024-07-25 12:18:12.102895] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:10:57.430 [2024-07-25 12:18:12.103015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.430 [2024-07-25 12:18:12.240937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.688 [2024-07-25 12:18:12.376396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.688 [2024-07-25 12:18:12.376462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.688 [2024-07-25 12:18:12.376473] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.688 [2024-07-25 12:18:12.376480] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.688 [2024-07-25 12:18:12.376486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.688 [2024-07-25 12:18:12.376685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.688 [2024-07-25 12:18:12.376995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.688 [2024-07-25 12:18:12.377697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.688 [2024-07-25 12:18:12.377775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.253 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.253 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:10:58.253 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:58.253 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.253 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:58.511 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.511 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:58.511 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30999 00:10:58.770 [2024-07-25 12:18:13.397119] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:58.770 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/25 12:18:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[nqn:nqn.2016-06.io.spdk:cnode30999 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar
00:10:58.770 request:
00:10:58.770 {
00:10:58.770   "method": "nvmf_create_subsystem",
00:10:58.770   "params": {
00:10:58.770     "nqn": "nqn.2016-06.io.spdk:cnode30999",
00:10:58.770     "tgt_name": "foobar"
00:10:58.770   }
00:10:58.770 }
00:10:58.770 Got JSON-RPC error response
00:10:58.770 GoRPCClient: error on JSON-RPC call'
00:10:58.770 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/25 12:18:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30999 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar
00:10:58.770 request:
00:10:58.770 {
00:10:58.770   "method": "nvmf_create_subsystem",
00:10:58.770   "params": {
00:10:58.770     "nqn": "nqn.2016-06.io.spdk:cnode30999",
00:10:58.770     "tgt_name": "foobar"
00:10:58.770   }
00:10:58.770 }
00:10:58.770 Got JSON-RPC error response
00:10:58.770 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:10:58.770 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:10:58.770 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18607
00:10:59.034 [2024-07-25 12:18:13.649389] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18607: invalid serial number 'SPDKISFASTANDAWESOME'
00:10:59.034 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/25 12:18:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18607 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME
00:10:59.034 request:
00:10:59.034 {
00:10:59.034   "method": "nvmf_create_subsystem",
00:10:59.034   "params": {
00:10:59.034     "nqn": "nqn.2016-06.io.spdk:cnode18607",
00:10:59.034     "serial_number": "SPDKISFASTANDAWESOME\u001f"
00:10:59.034   }
00:10:59.034 }
00:10:59.034 Got JSON-RPC error response
00:10:59.034 GoRPCClient: error on JSON-RPC call'
00:10:59.034 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/25 12:18:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18607 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME
00:10:59.034 request:
00:10:59.034 {
00:10:59.034   "method": "nvmf_create_subsystem",
00:10:59.034   "params": {
00:10:59.034     "nqn": "nqn.2016-06.io.spdk:cnode18607",
00:10:59.034     "serial_number": "SPDKISFASTANDAWESOME\u001f"
00:10:59.034   }
00:10:59.034 }
00:10:59.034 Got JSON-RPC error response
00:10:59.034 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]]
00:10:59.034 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:10:59.034 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29410
00:10:59.322 [2024-07-25 12:18:13.953650] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem
nqn.2016-06.io.spdk:cnode29410: invalid model number 'SPDK_Controller' 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/25 12:18:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode29410], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:10:59.322 request: 00:10:59.322 { 00:10:59.322 "method": "nvmf_create_subsystem", 00:10:59.322 "params": { 00:10:59.322 "nqn": "nqn.2016-06.io.spdk:cnode29410", 00:10:59.322 "model_number": "SPDK_Controller\u001f" 00:10:59.322 } 00:10:59.322 } 00:10:59.322 Got JSON-RPC error response 00:10:59.322 GoRPCClient: error on JSON-RPC call' 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/25 12:18:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode29410], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:10:59.322 request: 00:10:59.322 { 00:10:59.322 "method": "nvmf_create_subsystem", 00:10:59.322 "params": { 00:10:59.322 "nqn": "nqn.2016-06.io.spdk:cnode29410", 00:10:59.322 "model_number": "SPDK_Controller\u001f" 00:10:59.322 } 00:10:59.322 } 00:10:59.322 Got JSON-RPC error response 00:10:59.322 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 
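Interleaved here is the character-by-character trace of gen_random_s assembling the 21-character serial number for the next negative test: each iteration picks a code point from the chars array (ASCII 32 through 127), renders it with printf %x and echo -e, and appends it with string+=. Because invalid.sh set RANDOM=0 earlier, bash's generator is seeded deterministically and the "random" strings repeat across runs. A compact sketch of the traced behavior (the real helper in test/nvmf/target/invalid.sh may differ in detail):

    # Build a string of $1 characters drawn from ASCII 32..127.
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))                 # same range as the traced array
        for ((ll = 0; ll < length; ll++)); do
            # pick a code point, render it, append: mirrors the string+=X entries
            string+=$(printf "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

The lengths are not arbitrary: NVMe's Identify Controller data allots 20 bytes to the serial number and 40 to the model number, so the 21- and 41-character strings generated here are the smallest oversized inputs the target must reject.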
00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:59.322 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x21' 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.322 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x36' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ E == \- ]] 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'EZ":k{fC!/3=l?|c|;2c6' 00:10:59.323 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'EZ":k{fC!/3=l?|c|;2c6' nqn.2016-06.io.spdk:cnode30515 00:10:59.583 [2024-07-25 12:18:14.318006] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30515: invalid serial number 'EZ":k{fC!/3=l?|c|;2c6' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/25 12:18:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30515 serial_number:EZ":k{fC!/3=l?|c|;2c6], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN EZ":k{fC!/3=l?|c|;2c6 00:10:59.583 request: 00:10:59.583 { 00:10:59.583 "method": "nvmf_create_subsystem", 00:10:59.583 "params": { 00:10:59.583 "nqn": "nqn.2016-06.io.spdk:cnode30515", 00:10:59.583 "serial_number": "EZ\":k{fC!/3=l?|c|;2c6" 00:10:59.583 } 00:10:59.583 } 00:10:59.583 Got JSON-RPC error response 00:10:59.583 GoRPCClient: error on JSON-RPC call' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/25 12:18:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30515 serial_number:EZ":k{fC!/3=l?|c|;2c6], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN EZ":k{fC!/3=l?|c|;2c6 00:10:59.583 request: 00:10:59.583 { 00:10:59.583 "method": "nvmf_create_subsystem", 00:10:59.583 "params": { 00:10:59.583 "nqn": "nqn.2016-06.io.spdk:cnode30515", 00:10:59.583 "serial_number": "EZ\":k{fC!/3=l?|c|;2c6" 00:10:59.583 } 00:10:59.583 } 00:10:59.583 Got JSON-RPC error response 00:10:59.583 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:59.583 12:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 
00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 
00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:59.583 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:59.584 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.584 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.584 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 
00:10:59.584 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:59.584 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:59.584 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.584 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:59.843 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=q 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]] 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '}?t )&*WY3_uWZ_B '\''SU4>|t]ojd+e"^lH?aMZIq' 00:10:59.844 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '}?t )&*WY3_uWZ_B '\''SU4>|t]ojd+e"^lH?aMZIq' nqn.2016-06.io.spdk:cnode21533 00:11:00.101 [2024-07-25 12:18:14.798404] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21533: invalid model number '}?t )&*WY3_uWZ_B 'SU4>|t]ojd+e"^lH?aMZIq' 00:11:00.101 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/25 12:18:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:}?t )&*WY3_uWZ_B '\''SU4>|t]ojd+e"^lH?aMZIq nqn:nqn.2016-06.io.spdk:cnode21533], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN }?t )&*WY3_uWZ_B '\''SU4>|t]ojd+e"^lH?aMZIq 00:11:00.101 request: 00:11:00.101 { 00:11:00.101 "method": "nvmf_create_subsystem", 00:11:00.101 "params": { 00:11:00.101 "nqn": "nqn.2016-06.io.spdk:cnode21533", 00:11:00.101 "model_number": "}?t )&*WY3_uWZ_B '\''SU4>\u007f|t]ojd+e\"^lH?aMZIq" 00:11:00.101 } 00:11:00.101 } 00:11:00.101 Got JSON-RPC error response 00:11:00.101 GoRPCClient: error on JSON-RPC call' 00:11:00.101 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/25 12:18:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:}?t )&*WY3_uWZ_B 'SU4>|t]ojd+e"^lH?aMZIq nqn:nqn.2016-06.io.spdk:cnode21533], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN }?t )&*WY3_uWZ_B 'SU4>|t]ojd+e"^lH?aMZIq 00:11:00.101 request: 00:11:00.101 { 00:11:00.101 "method": "nvmf_create_subsystem", 00:11:00.101 "params": { 00:11:00.101 "nqn": "nqn.2016-06.io.spdk:cnode21533", 00:11:00.101 "model_number": "}?t )&*WY3_uWZ_B 'SU4>\u007f|t]ojd+e\"^lH?aMZIq" 00:11:00.101 } 00:11:00.101 } 00:11:00.101 Got JSON-RPC error response 00:11:00.101 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:00.101 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:00.359 [2024-07-25 12:18:15.078695] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.359 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:00.619 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:00.619 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:00.619 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:00.619 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:00.619 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a 
'' -s 4421 00:11:00.877 [2024-07-25 12:18:15.614251] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:00.878 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/25 12:18:15 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:00.878 request: 00:11:00.878 { 00:11:00.878 "method": "nvmf_subsystem_remove_listener", 00:11:00.878 "params": { 00:11:00.878 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:00.878 "listen_address": { 00:11:00.878 "trtype": "tcp", 00:11:00.878 "traddr": "", 00:11:00.878 "trsvcid": "4421" 00:11:00.878 } 00:11:00.878 } 00:11:00.878 } 00:11:00.878 Got JSON-RPC error response 00:11:00.878 GoRPCClient: error on JSON-RPC call' 00:11:00.878 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/25 12:18:15 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:00.878 request: 00:11:00.878 { 00:11:00.878 "method": "nvmf_subsystem_remove_listener", 00:11:00.878 "params": { 00:11:00.878 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:00.878 "listen_address": { 00:11:00.878 "trtype": "tcp", 00:11:00.878 "traddr": "", 00:11:00.878 "trsvcid": "4421" 00:11:00.878 } 00:11:00.878 } 00:11:00.878 } 00:11:00.878 Got JSON-RPC error response 00:11:00.878 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:00.878 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5056 -i 0 00:11:01.136 [2024-07-25 12:18:15.842418] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5056: invalid cntlid range [0-65519] 00:11:01.136 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/25 12:18:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5056], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:01.136 request: 00:11:01.136 { 00:11:01.136 "method": "nvmf_create_subsystem", 00:11:01.136 "params": { 00:11:01.136 "nqn": "nqn.2016-06.io.spdk:cnode5056", 00:11:01.136 "min_cntlid": 0 00:11:01.136 } 00:11:01.136 } 00:11:01.136 Got JSON-RPC error response 00:11:01.136 GoRPCClient: error on JSON-RPC call' 00:11:01.136 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/25 12:18:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5056], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:01.136 request: 00:11:01.136 { 00:11:01.136 "method": "nvmf_create_subsystem", 00:11:01.136 "params": { 00:11:01.136 "nqn": "nqn.2016-06.io.spdk:cnode5056", 00:11:01.136 "min_cntlid": 0 00:11:01.136 } 00:11:01.136 } 00:11:01.136 Got JSON-RPC error response 00:11:01.136 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:01.136 12:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29334 -i 65520 00:11:01.541 [2024-07-25 12:18:16.088127] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29334: invalid cntlid range [65520-65519] 00:11:01.541 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/25 12:18:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29334], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:01.541 request: 00:11:01.541 { 00:11:01.541 "method": "nvmf_create_subsystem", 00:11:01.541 "params": { 00:11:01.541 "nqn": "nqn.2016-06.io.spdk:cnode29334", 00:11:01.541 "min_cntlid": 65520 00:11:01.541 } 00:11:01.541 } 00:11:01.541 Got JSON-RPC error response 00:11:01.541 GoRPCClient: error on JSON-RPC call' 00:11:01.541 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/25 12:18:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29334], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:01.541 request: 00:11:01.541 { 00:11:01.541 "method": "nvmf_create_subsystem", 00:11:01.541 "params": { 00:11:01.541 "nqn": "nqn.2016-06.io.spdk:cnode29334", 00:11:01.541 "min_cntlid": 65520 00:11:01.541 } 00:11:01.541 } 00:11:01.541 Got JSON-RPC error response 00:11:01.541 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:01.541 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24748 -I 0 00:11:01.839 [2024-07-25 12:18:16.388382] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24748: invalid cntlid range [1-0] 00:11:01.839 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/25 12:18:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24748], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:01.839 request: 00:11:01.839 { 00:11:01.839 "method": "nvmf_create_subsystem", 00:11:01.839 "params": { 00:11:01.839 "nqn": "nqn.2016-06.io.spdk:cnode24748", 00:11:01.839 "max_cntlid": 0 00:11:01.839 } 00:11:01.839 } 00:11:01.839 Got JSON-RPC error response 00:11:01.839 GoRPCClient: error on JSON-RPC call' 00:11:01.839 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/25 12:18:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24748], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:01.839 request: 00:11:01.839 { 00:11:01.839 "method": "nvmf_create_subsystem", 00:11:01.839 "params": { 00:11:01.839 "nqn": "nqn.2016-06.io.spdk:cnode24748", 00:11:01.839 "max_cntlid": 0 00:11:01.839 } 00:11:01.839 } 00:11:01.839 Got JSON-RPC error response 00:11:01.839 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:01.839 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31345 -I 65520 00:11:01.839 [2024-07-25 12:18:16.628582] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31345: invalid cntlid range [1-65520] 00:11:01.839 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/25 12:18:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode31345], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:01.839 request: 00:11:01.839 { 00:11:01.839 "method": "nvmf_create_subsystem", 00:11:01.839 "params": { 00:11:01.839 "nqn": "nqn.2016-06.io.spdk:cnode31345", 00:11:01.839 "max_cntlid": 65520 00:11:01.839 } 00:11:01.839 } 00:11:01.839 Got JSON-RPC error response 00:11:01.839 GoRPCClient: error on JSON-RPC call' 00:11:01.839 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/25 12:18:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode31345], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:01.839 request: 00:11:01.839 { 00:11:01.839 "method": "nvmf_create_subsystem", 00:11:01.839 "params": { 00:11:01.839 "nqn": "nqn.2016-06.io.spdk:cnode31345", 00:11:01.839 "max_cntlid": 65520 00:11:01.839 } 00:11:01.839 } 00:11:01.839 Got JSON-RPC error response 00:11:01.839 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:01.839 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26678 -i 6 -I 5 00:11:02.098 [2024-07-25 12:18:16.886709] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26678: invalid cntlid range [6-5] 00:11:02.098 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/25 12:18:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode26678], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:02.098 request: 00:11:02.098 { 00:11:02.098 "method": "nvmf_create_subsystem", 00:11:02.098 "params": { 00:11:02.098 "nqn": "nqn.2016-06.io.spdk:cnode26678", 00:11:02.098 "min_cntlid": 6, 00:11:02.098 "max_cntlid": 5 00:11:02.098 } 00:11:02.098 } 00:11:02.098 Got JSON-RPC error response 00:11:02.098 GoRPCClient: error on JSON-RPC call' 00:11:02.098 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/25 12:18:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode26678], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:02.098 request: 00:11:02.098 { 00:11:02.098 "method": "nvmf_create_subsystem", 00:11:02.098 "params": { 00:11:02.098 "nqn": "nqn.2016-06.io.spdk:cnode26678", 00:11:02.098 "min_cntlid": 6, 00:11:02.098 "max_cntlid": 5 00:11:02.098 } 00:11:02.098 } 00:11:02.098 Got JSON-RPC error response 00:11:02.098 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:02.098 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:02.356 { 00:11:02.356 "name": "foobar", 00:11:02.356 "method": "nvmf_delete_target", 00:11:02.356 "req_id": 1 00:11:02.356 } 00:11:02.356 Got JSON-RPC error response 00:11:02.356 response: 00:11:02.356 { 00:11:02.356 "code": -32602, 00:11:02.356 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:02.356 }' 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:02.356 { 00:11:02.356 "name": "foobar", 00:11:02.356 "method": "nvmf_delete_target", 00:11:02.356 "req_id": 1 00:11:02.356 } 00:11:02.356 Got JSON-RPC error response 00:11:02.356 response: 00:11:02.356 { 00:11:02.356 "code": -32602, 00:11:02.356 "message": "The specified target doesn't exist, cannot delete it." 00:11:02.356 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.356 rmmod nvme_tcp 00:11:02.356 rmmod nvme_fabrics 00:11:02.356 rmmod nvme_keyring 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 75223 ']' 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 75223 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 75223 ']' 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 75223 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75223 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.356 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.356 killing process with pid 75223 00:11:02.357 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 75223' 00:11:02.357 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 75223 00:11:02.357 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 75223 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:02.615 00:11:02.615 real 0m5.788s 00:11:02.615 user 0m23.014s 00:11:02.615 sys 0m1.276s 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:02.615 ************************************ 00:11:02.615 END TEST nvmf_invalid 00:11:02.615 ************************************ 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.615 ************************************ 00:11:02.615 START TEST nvmf_connect_stress 00:11:02.615 ************************************ 00:11:02.615 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:02.874 * Looking for test storage... 
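
The nvmf_invalid trace above boils down to one pattern: target/invalid.sh builds a random printable-ASCII string one character at a time (printf %x yields the hex code, echo -e renders it, string+= appends it), then feeds the result to rpc.py nvmf_create_subsystem and asserts the target rejects it. Below is a minimal sketch of that pattern, assuming a running nvmf_tgt; the rpc.py path is the one this run uses, while gen_random_s and the cnode9999 NQN are illustrative stand-ins for the script's own helper and the randomly numbered subsystems in the trace:

    gen_random_s() {
        local length=$1 ll string=
        for (( ll = 0; ll < length; ll++ )); do
            # random printable code point in 32..127, the same range the trace walks
            local code=$(( RANDOM % 96 + 32 ))
            string+=$(printf "\\x$(printf '%x' "$code")")
        done
        # note: invalid.sh additionally guards against strings starting with '-'
        # (the @28 check in the trace); this sketch omits that guard
        printf '%s\n' "$string"
    }

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sn=$(gen_random_s 21)   # 21 chars: one past the 20-byte serial number field in the NVMe Identify data
    if out=$("$rpc" nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode9999 2>&1); then
        echo "expected the over-long serial number to be rejected" >&2
        exit 1
    fi
    [[ $out == *"Invalid SN"* ]]   # Code=-32602, as in the responses above

The later checks in the trace repeat the same shape for other parameters: a 41-character model number draws Invalid MN, and the min_cntlid/max_cntlid probes ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) each draw Invalid cntlid range, which implies the accepted controller ID window is 1 through 65519.
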
00:11:02.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:02.874 Cannot find device "nvmf_tgt_br" 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.874 Cannot find device "nvmf_tgt_br2" 00:11:02.874 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:02.875 Cannot find device "nvmf_tgt_br" 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:02.875 Cannot find device "nvmf_tgt_br2" 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:02.875 
12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:02.875 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:03.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:11:03.133 00:11:03.133 --- 10.0.0.2 ping statistics --- 00:11:03.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.133 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:03.133 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.133 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:11:03.133 00:11:03.133 --- 10.0.0.3 ping statistics --- 00:11:03.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.133 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:03.133 00:11:03.133 --- 10.0.0.1 ping statistics --- 00:11:03.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.133 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.133 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=75726 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 75726 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 75726 ']' 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.134 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.134 [2024-07-25 12:18:17.905489] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
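
With all three pings answering, the harness prefixes the NVMF_APP array with the namespace command (the "@209" line above) and starts nvmf_tgt inside it; waitforlisten then blocks until the app's JSON-RPC socket responds. A rough sketch of that launch-and-poll pattern, assuming the default /var/tmp/spdk.sock socket and the rpc.py shipped in the SPDK tree:

    # Start the target inside the namespace, then poll its RPC socket until ready.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0xE &
    tgt_pid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt ($tgt_pid) is up and serving RPCs"

Because rpc.py talks over a Unix-domain socket on the shared filesystem, it does not itself need to run inside the namespace.
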
00:11:03.134 [2024-07-25 12:18:17.905574] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.392 [2024-07-25 12:18:18.042686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.392 [2024-07-25 12:18:18.130929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.392 [2024-07-25 12:18:18.130986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.392 [2024-07-25 12:18:18.130997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.392 [2024-07-25 12:18:18.131005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.392 [2024-07-25 12:18:18.131012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.392 [2024-07-25 12:18:18.134387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.392 [2024-07-25 12:18:18.134452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.392 [2024-07-25 12:18:18.134461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.326 [2024-07-25 12:18:18.980656] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.326 12:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.326 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.326 [2024-07-25 12:18:18.998460] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.326 NULL1 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75779 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.326 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.585 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
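
The seq 1 20 loop above appends twenty copies of an RPC snippet to rpc.txt via cat here-docs; xtrace does not echo here-doc bodies, so the actual payload is not visible in this log. The general batch-then-replay shape is sketched below, with a deliberately harmless placeholder (bdev_get_bdevs, a read-only query) standing in for the elided commands:

    # Build a batch file of RPCs, then replay it line by line.
    rpcs=/tmp/rpc.txt
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        printf '%s\n' "bdev_get_bdevs" >> "$rpcs"   # placeholder; the harness's real snippet is elided
    done
    while read -r cmd; do
        ./scripts/rpc.py $cmd                       # one RPC per line of the batch file
    done < "$rpcs"
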
# [[ 0 == 0 ]] 00:11:04.585 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:04.585 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.585 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.585 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.151 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.151 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:05.151 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.151 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.151 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.409 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.409 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:05.409 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.409 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.409 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.668 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.668 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:05.668 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.668 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.668 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.995 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.995 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:05.995 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.995 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.995 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.252 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.253 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:06.253 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.253 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.253 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.510 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.510 
12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:06.510 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.510 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.510 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.077 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.077 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:07.077 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.077 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.077 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.335 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.335 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:07.335 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.335 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.335 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.592 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.592 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:07.592 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.592 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.592 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.850 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.850 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:07.850 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.850 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.850 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.107 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.107 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:08.107 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.107 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.107 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.670 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.670 12:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:08.670 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.670 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.670 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.932 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.932 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:08.932 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.932 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.932 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.204 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.204 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:09.204 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.204 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.204 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.462 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.462 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:09.462 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.462 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.462 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.720 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.720 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:09.720 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.720 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.720 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.285 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:10.285 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.285 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.285 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.542 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.542 12:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:10.542 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.542 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.542 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.800 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.800 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:10.800 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.800 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.800 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.057 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.057 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:11.057 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.057 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.057 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.622 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.622 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:11.622 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.622 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.622 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.880 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.880 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:11.880 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.880 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.880 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.137 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.137 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:12.137 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.137 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.137 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.395 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.395 12:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:12.395 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.395 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.395 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.653 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.653 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:12.653 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.653 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.653 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.217 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.217 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:13.217 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.217 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.217 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.474 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.474 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:13.474 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.474 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.474 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.731 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.731 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:13.731 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.731 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.731 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.988 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.988 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:13.988 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.988 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.988 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.245 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.245 12:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:14.245 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.245 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.245 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.503 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75779 00:11:14.761 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75779) - No such process 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75779 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.761 rmmod nvme_tcp 00:11:14.761 rmmod nvme_fabrics 00:11:14.761 rmmod nvme_keyring 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 75726 ']' 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 75726 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 75726 ']' 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 75726 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75726 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:14.761 12:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:14.761 killing process with pid 75726 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75726' 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 75726 00:11:14.761 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 75726 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:15.019 00:11:15.019 real 0m12.366s 00:11:15.019 user 0m41.428s 00:11:15.019 sys 0m3.250s 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.019 ************************************ 00:11:15.019 END TEST nvmf_connect_stress 00:11:15.019 ************************************ 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:15.019 ************************************ 00:11:15.019 START TEST nvmf_fused_ordering 00:11:15.019 ************************************ 00:11:15.019 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:15.277 * Looking for test storage... 
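
The long run of "kill -0 75779" probes that preceded this point is the harness's liveness loop: as long as the connect_stress process exists, another round of RPCs is driven at the target; once kill -0 reports "No such process", wait reaps the exit status and the EXIT trap unwinds the fixture. A simplified skeleton of that supervise-and-cleanup pattern (the connect_stress invocation is copied from the trace; the per-tick work is a stand-in):

    cleanup() { rm -f "$rpcs"; kill "$tgt_pid" 2>/dev/null; }
    trap cleanup SIGINT SIGTERM EXIT
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    stress_pid=$!
    while kill -0 "$stress_pid" 2>/dev/null; do
        ./scripts/rpc.py bdev_get_bdevs >/dev/null   # stand-in for replaying rpc.txt each tick
    done
    wait "$stress_pid"                               # propagates connect_stress's exit code

Note that kill -0 sends no signal at all; it only asks the kernel whether the PID is visible, which is why it doubles as a portable process-existence test.
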
00:11:15.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
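
The heavily duplicated PATH values echoed above are an artifact of every nested run_test re-sourcing /etc/opt/spdk-pkgdep/paths/export.sh, which prepends the Go, protoc, and golangci directories again on each pass. Harmless, but noisy; a small bash sketch of how such a PATH could be deduplicated while preserving first-seen order:

    dedupe_path() {
        local IFS=: dir out=
        for dir in $PATH; do
            case ":$out:" in
                *":$dir:"*) ;;                  # already present, skip
                *) out=${out:+$out:}$dir ;;     # append; first occurrence wins
            esac
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedupe_path)
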
-- # '[' -n '' ']' 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:15.277 Cannot find device "nvmf_tgt_br" 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:15.277 Cannot find device "nvmf_tgt_br2" 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:11:15.277 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:15.278 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:15.278 Cannot find device "nvmf_tgt_br" 00:11:15.278 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:11:15.278 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:15.278 Cannot find device "nvmf_tgt_br2" 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:15.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:15.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:15.278 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:15.535 
12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:15.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:11:15.535 00:11:15.535 --- 10.0.0.2 ping statistics --- 00:11:15.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.535 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:15.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:15.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:11:15.535 00:11:15.535 --- 10.0.0.3 ping statistics --- 00:11:15.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.535 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:15.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:15.535 00:11:15.535 --- 10.0.0.1 ping statistics --- 00:11:15.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.535 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:15.535 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=76101 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 76101 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 76101 ']' 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.536 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.536 [2024-07-25 12:18:30.347577] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
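
The "Cannot find device" and "Cannot open network namespace" failures above are expected: nvmf_veth_init first tears down leftovers from any previous run (each failing teardown step is swallowed, hence the "-- # true" markers) before building the topology fresh. Taken together, the ip/iptables commands traced here amount to the standalone script below. This is a minimal sketch with names and addresses copied from the trace; the second target interface (nvmf_tgt_if2, 10.0.0.3, bridged via nvmf_tgt_br2) is configured the same way and omitted for brevity. Run as root:

  #!/usr/bin/env bash
  set -e

  ip netns add nvmf_tgt_ns_spdk

  # One veth pair for the initiator, one for the target; the target end
  # is moved into the namespace where nvmf_tgt will run.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers so the two namespaces can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Accept NVMe/TCP traffic (port 4420) and let frames cross the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2   # initiator -> target, as verified in the trace above
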
00:11:15.536 [2024-07-25 12:18:30.347690] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.792 [2024-07-25 12:18:30.484734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.792 [2024-07-25 12:18:30.601055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.792 [2024-07-25 12:18:30.601113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.792 [2024-07-25 12:18:30.601126] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.792 [2024-07-25 12:18:30.601134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.792 [2024-07-25 12:18:30.601142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.792 [2024-07-25 12:18:30.601171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.723 [2024-07-25 12:18:31.389785] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.723 
[2024-07-25 12:18:31.405905] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.723 NULL1 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.723 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:16.723 [2024-07-25 12:18:31.456383] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
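
rpc_cmd in these traces wraps SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket. The launch-and-provision sequence above corresponds roughly to the sketch below; the paths are the ones this CI checkout uses, the polling loop is an approximation of what waitforlisten does, and the flag readings (-a allow any host, -s serial number, -m max namespaces, -u in-capsule data size) are annotations based on my reading of rpc.py, not authoritative documentation:

  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py"

  # Start the target inside the namespace: shm id 0 (-i), all tracepoint
  # groups enabled (-e 0xFFFF), single reactor on core 1 (-m 0x2).
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Poll until the app answers on its RPC socket (what waitforlisten does).
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512 B blocks -> "size: 1GB" below
  "$rpc" bdev_wait_for_examine
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then dials the target directly with the -r transport-ID string seen below ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'), which packs transport type, address family, address, service (port), and subsystem NQN into a single argument.
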
00:11:16.723 [2024-07-25 12:18:31.456430] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76155 ] 00:11:17.287 Attached to nqn.2016-06.io.spdk:cnode1 00:11:17.287 Namespace ID: 1 size: 1GB 00:11:17.287 fused_ordering(0) ... fused_ordering(1023) [1024 sequential fused_ordering completion lines, timestamps 00:11:17.287 through 00:11:18.948, elided] 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:18.948 rmmod nvme_tcp 00:11:18.948 rmmod nvme_fabrics 00:11:18.948 rmmod nvme_keyring 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:18.948
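
The teardown above is the second half of a pattern armed when the test began: 'trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT' registered cleanup right after the target started, and 'trap - SIGINT SIGTERM EXIT' here disarms it on the success path before running nvmftestfini deliberately. A minimal sketch of that discipline, with the helper body reduced to the steps this trace actually shows (the helper name and exact steps are illustrative, not the harness source):

  cleanup() {
      kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid"   # stop nvmf_tgt
      modprobe -v -r nvme-tcp                          # unload kernel initiator modules
      modprobe -v -r nvme-fabrics
      ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
  }
  trap cleanup SIGINT SIGTERM EXIT

  # ... test body; any signal or early exit still triggers cleanup ...

  trap - SIGINT SIGTERM EXIT
  cleanup

The killprocess call traced below adds liveness checks (kill -0, ps on the pid) before signalling, so a stale pid is reported rather than blindly killed.
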
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 76101 ']' 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 76101 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 76101 ']' 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 76101 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76101 00:11:18.948 killing process with pid 76101 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76101' 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 76101 00:11:18.948 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 76101 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:19.206 00:11:19.206 real 0m4.096s 00:11:19.206 user 0m4.959s 00:11:19.206 sys 0m1.323s 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.206 ************************************ 00:11:19.206 END TEST nvmf_fused_ordering 00:11:19.206 ************************************ 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.206 ************************************ 00:11:19.206 START TEST nvmf_ns_masking 00:11:19.206 ************************************ 00:11:19.206 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:19.206 * Looking for test storage... 00:11:19.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:19.206 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:19.206 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain directories repeated by repeated sourcing of paths/export.sh, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[same value with /opt/go/1.21.1/bin rotated to the front, elided] 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc/21.7/bin rotated to the front, elided] 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [exported PATH value, elided] 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.465 12:18:34
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=11fdb595-e0e9-4f8d-99f3-f686c2b42c74 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=62a10a13-14a8-46b8-aead-ccaec3a79518 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=620a4dd6-af1d-434d-a686-cc4007f7056e 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.465 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 
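
The per-host identities generated above (two namespace UUIDs, two host NQNs, one host UUID) are the moving parts of the masking checks that follow: as the test name suggests, which namespaces a connecting host is shown should depend on the host NQN it presents. A hypothetical two-connection sketch using nvme-cli flags this harness already relies on (NVME_CONNECT and --hostnqn above), with the subsystem and address as provisioned earlier in the log:

  SUBNQN=nqn.2016-06.io.spdk:cnode1

  # Connect as host1 and list what it can see.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" \
       --hostnqn=nqn.2016-06.io.spdk:host1
  nvme list                          # expected: only namespaces mapped to host1
  nvme disconnect -n "$SUBNQN"

  # Reconnect as host2; a differently masked namespace set should appear.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" \
       --hostnqn=nqn.2016-06.io.spdk:host2
  nvme list
  nvme disconnect -n "$SUBNQN"
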
00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:19.466 Cannot find device "nvmf_tgt_br" 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:19.466 Cannot find device "nvmf_tgt_br2" 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:19.466 Cannot find device "nvmf_tgt_br" 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:19.466 Cannot find device "nvmf_tgt_br2" 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:19.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:19.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.466 
12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:19.466 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:19.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:19.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:11:19.724 00:11:19.724 --- 10.0.0.2 ping statistics --- 00:11:19.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.724 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:19.724 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:19.724 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:11:19.724 00:11:19.724 --- 10.0.0.3 ping statistics --- 00:11:19.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.724 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:19.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:19.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:19.724 00:11:19.724 --- 10.0.0.1 ping statistics --- 00:11:19.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.724 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=76367 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 76367 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 76367 ']' 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.724 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.725 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:19.725 [2024-07-25 12:18:34.534368] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:11:19.725 [2024-07-25 12:18:34.534485] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.983 [2024-07-25 12:18:34.677157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.983 [2024-07-25 12:18:34.807191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.983 [2024-07-25 12:18:34.807266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.983 [2024-07-25 12:18:34.807280] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.983 [2024-07-25 12:18:34.807291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.983 [2024-07-25 12:18:34.807335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.983 [2024-07-25 12:18:34.807370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.916 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.916 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:20.916 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:20.916 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:20.916 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:20.916 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.916 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:21.176 [2024-07-25 12:18:35.788922] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.176 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:21.176 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:21.176 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:21.434 Malloc1 00:11:21.434 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:21.693 Malloc2 00:11:21.693 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:21.951 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:22.209 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.467 [2024-07-25 12:18:37.208166] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.467 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:22.467 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 620a4dd6-af1d-434d-a686-cc4007f7056e -a 10.0.0.2 -s 4420 -i 4 00:11:22.724 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.724 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:22.724 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.724 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:22.724 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.665 [ 0]:0x1 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
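The ns_is_visible checks that follow pair an active-namespace listing with an NGUID probe. Reconstructed from the commands in this trace (the real helper lives in test/nvmf/target/ns_masking.sh; this is a sketch of its observable behavior, not the verbatim source):

    ns_is_visible() {
        # Show the namespace in the controller's active-ns list,
        # producing lines like "[ 0]:0x1" in the output below.
        nvme list-ns /dev/nvme0 | grep "$1"
        # A namespace masked away from this host identifies with an
        # all-zero NGUID, so real visibility is decided by comparing
        # against 32 zeros.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

This is why the NOT wrapper appears later in the trace: for a masked namespace the function is expected to fail with the zero-NGUID comparison, and the harness asserts on that failure (es=1).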
00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32751d7aace0479c9fc11dfe59b4132b 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32751d7aace0479c9fc11dfe59b4132b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.665 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:24.922 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:24.922 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.922 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:24.922 [ 0]:0x1 00:11:24.922 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.922 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32751d7aace0479c9fc11dfe59b4132b 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32751d7aace0479c9fc11dfe59b4132b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.180 [ 1]:0x2 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24332d04189f49d6bb611c8e676c4268 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24332d04189f49d6bb611c8e676c4268 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.180 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.437 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:26.002 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:26.002 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 620a4dd6-af1d-434d-a686-cc4007f7056e -a 10.0.0.2 -s 4420 -i 4 00:11:26.002 12:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:26.002 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:26.002 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.002 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:26.002 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:26.002 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.900 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:28.159 [ 0]:0x2 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24332d04189f49d6bb611c8e676c4268 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24332d04189f49d6bb611c8e676c4268 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:28.159 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:28.417 [ 0]:0x1 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32751d7aace0479c9fc11dfe59b4132b 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32751d7aace0479c9fc11dfe59b4132b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:28.417 [ 1]:0x2 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:28.417 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:28.683 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=24332d04189f49d6bb611c8e676c4268 00:11:28.683 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24332d04189f49d6bb611c8e676c4268 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:28.683 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:28.946 [ 0]:0x2 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24332d04189f49d6bb611c8e676c4268 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 24332d04189f49d6bb611c8e676c4268 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.946 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:29.204 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:29.204 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 620a4dd6-af1d-434d-a686-cc4007f7056e -a 10.0.0.2 -s 4420 -i 4 00:11:29.462 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:29.462 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:29.462 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.462 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:29.462 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:29.462 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:31.359 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:31.359 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:31.359 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.359 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:31.359 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.359 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:31.359 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:31.359 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:31.617 [ 0]:0x1 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32751d7aace0479c9fc11dfe59b4132b 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32751d7aace0479c9fc11dfe59b4132b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:31.617 [ 1]:0x2 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24332d04189f49d6bb611c8e676c4268 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24332d04189f49d6bb611c8e676c4268 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.617 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 
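The add/remove host round-trip being exercised in this stretch reduces to two RPCs against the running target; with the constants from this run, the calls visible in the trace are (a condensed sketch, not new functionality):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Grant host1 access to namespace 1 -> ns_is_visible 0x1 passes (non-zero NGUID)
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # Revoke it again -> id-ns reports an all-zero NGUID, so the NOT
    # wrapper around ns_is_visible 0x1 expects failure
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Note that the connected initiator observes the change without reconnecting: the same /dev/nvme0 controller flips between the real NGUID and zeros across the two RPCs.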
00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:31.876 [ 0]:0x2 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:31.876 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24332d04189f49d6bb611c8e676c4268 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24332d04189f49d6bb611c8e676c4268 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:32.134 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:32.134 [2024-07-25 12:18:46.989724] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:32.134 2024/07/25 12:18:46 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:11:32.134 request: 00:11:32.134 { 00:11:32.134 "method": "nvmf_ns_remove_host", 00:11:32.134 "params": { 00:11:32.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:32.134 "nsid": 2, 00:11:32.134 "host": "nqn.2016-06.io.spdk:host1" 00:11:32.134 } 00:11:32.134 } 00:11:32.134 Got JSON-RPC error response 00:11:32.134 GoRPCClient: error on JSON-RPC call 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:11:32.393 [ 0]:0x2 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=24332d04189f49d6bb611c8e676c4268 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 24332d04189f49d6bb611c8e676c4268 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76753 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76753 /var/tmp/host.sock 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 76753 ']' 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.393 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:32.393 [2024-07-25 12:18:47.244940] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
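From here the test drives a second SPDK application, the initiator side, pinned to core 1 by -m 2 and listening on its own RPC socket so it does not collide with the target's /var/tmp/spdk.sock. The hostrpc calls below expand to roughly this (a sketch assembled from the commands in the trace):

    # Start the host-side app on a dedicated RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # Every hostrpc invocation is rpc.py pointed at that socket, e.g.:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

Attaching twice, once per host NQN, is what yields the nvme0n1/nvme1n2 bdev names that the subsequent bdev_get_bdevs checks compare against the two namespace UUIDs.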
00:11:32.394 [2024-07-25 12:18:47.245063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76753 ] 00:11:32.652 [2024-07-25 12:18:47.384120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.652 [2024-07-25 12:18:47.507688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.586 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.586 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:33.586 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.845 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:34.103 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 11fdb595-e0e9-4f8d-99f3-f686c2b42c74 00:11:34.103 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:34.103 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 11FDB595E0E94F8D99F3F686C2B42C74 -i 00:11:34.361 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 62a10a13-14a8-46b8-aead-ccaec3a79518 00:11:34.361 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:34.361 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 62A10A1314A846B8AEADCCAEC3A79518 -i 00:11:34.619 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:34.877 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:35.134 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:35.135 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:35.392 nvme0n1 00:11:35.392 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:35.392 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:35.958 nvme1n2 00:11:35.958 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:35.958 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:35.958 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:35.958 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:35.958 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:35.958 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:35.958 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:35.958 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:35.958 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:36.523 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 11fdb595-e0e9-4f8d-99f3-f686c2b42c74 == \1\1\f\d\b\5\9\5\-\e\0\e\9\-\4\f\8\d\-\9\9\f\3\-\f\6\8\6\c\2\b\4\2\c\7\4 ]] 00:11:36.523 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:36.523 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:36.523 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 62a10a13-14a8-46b8-aead-ccaec3a79518 == \6\2\a\1\0\a\1\3\-\1\4\a\8\-\4\6\b\8\-\a\e\a\d\-\c\c\a\e\c\3\a\7\9\5\1\8 ]] 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 76753 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 76753 ']' 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 76753 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76753 00:11:36.782 killing process with pid 76753 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76753' 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 76753 00:11:36.782 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # 
wait 76753 00:11:37.040 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.298 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:37.298 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:37.298 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:37.298 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:37.298 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:37.298 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:37.298 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:37.298 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:37.298 rmmod nvme_tcp 00:11:37.298 rmmod nvme_fabrics 00:11:37.298 rmmod nvme_keyring 00:11:37.555 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 76367 ']' 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 76367 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 76367 ']' 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 76367 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76367 00:11:37.556 killing process with pid 76367 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76367' 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 76367 00:11:37.556 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 76367 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
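The shutdown path above (nvmftestfini) condenses to the sequence below. Most of it is read directly from the trace; the assumption that _remove_spdk_ns deletes the network namespace is inferred from its name and effect, not confirmed from source:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # target started by nvmfappstart (pid 76367 in this run)
    _remove_spdk_ns                # assumed: deletes nvmf_tgt_ns_spdk and its veth ends
    ip -4 addr flush nvmf_init_if  # final trace line before the timing summary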
00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:37.814 00:11:37.814 real 0m18.540s 00:11:37.814 user 0m29.618s 00:11:37.814 sys 0m2.833s 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:37.814 ************************************ 00:11:37.814 END TEST nvmf_ns_masking 00:11:37.814 ************************************ 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:37.814 ************************************ 00:11:37.814 START TEST nvmf_auth_target 00:11:37.814 ************************************ 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:37.814 * Looking for test storage... 
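The PASS above rests on the check pattern ns_masking.sh used throughout: list the bdevs the host initiator sees, then compare each attached namespace's UUID against the value the target was configured with. A condensed sketch of that pattern, with the host socket path and one UUID taken from the trace above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # names of all bdevs visible to the host initiator
    $RPC -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    # UUID of one attached namespace, checked against the expected value
    uuid=$($RPC -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid')
    [[ $uuid == 11fdb595-e0e9-4f8d-99f3-f686c2b42c74 ]]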
00:11:37.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.814 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:37.815 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.815 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.073 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.073 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.073 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.073 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.073 12:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 
-- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:38.074 Cannot find device "nvmf_tgt_br" 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:38.074 Cannot find device "nvmf_tgt_br2" 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:38.074 Cannot find device "nvmf_tgt_br" 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:38.074 Cannot find device "nvmf_tgt_br2" 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:38.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:38.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:38.074 12:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:38.074 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:38.332 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:38.332 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:38.332 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:38.332 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:38.332 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:38.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:11:38.332 00:11:38.332 --- 10.0.0.2 ping statistics --- 00:11:38.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.332 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:38.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:38.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:11:38.332 00:11:38.332 --- 10.0.0.3 ping statistics --- 00:11:38.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.332 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:38.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:38.332 00:11:38.332 --- 10.0.0.1 ping statistics --- 00:11:38.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.332 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77113 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77113 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77113 ']' 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
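nvmf_veth_init above builds the virtual topology the rest of the auth test runs on: a nvmf_tgt_ns_spdk namespace holding the target interfaces (10.0.0.2 and 10.0.0.3), the initiator interface (10.0.0.1) left in the root namespace, and an nvmf_br bridge joining the peer ends, with iptables rules accepting port 4420 and bridge forwarding. The three pings verify connectivity in both directions before the target is started inside the namespace. A condensed sketch of the same setup, all commands as traced above (the second target link nvmf_tgt_if2/nvmf_tgt_br2 and the remaining "ip link set ... up" steps follow the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT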
00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.332 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77157 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5d555e10c16114a6b8c95c3201ecbdc4e4b8d2f89720aa5b 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ByC 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5d555e10c16114a6b8c95c3201ecbdc4e4b8d2f89720aa5b 0 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5d555e10c16114a6b8c95c3201ecbdc4e4b8d2f89720aa5b 0 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5d555e10c16114a6b8c95c3201ecbdc4e4b8d2f89720aa5b 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:39.715 12:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ByC 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ByC 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ByC 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1488270db6b0ecd785adfa0ef18209a66cec9f68ef837b8067a91c48bed42d2f 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.soU 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1488270db6b0ecd785adfa0ef18209a66cec9f68ef837b8067a91c48bed42d2f 3 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1488270db6b0ecd785adfa0ef18209a66cec9f68ef837b8067a91c48bed42d2f 3 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1488270db6b0ecd785adfa0ef18209a66cec9f68ef837b8067a91c48bed42d2f 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.soU 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.soU 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.soU 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:39.715 12:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=74a58dca80754e0fd82210d2158c27f3 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GR2 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 74a58dca80754e0fd82210d2158c27f3 1 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 74a58dca80754e0fd82210d2158c27f3 1 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=74a58dca80754e0fd82210d2158c27f3 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GR2 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GR2 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.GR2 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c13e38c37993cde309b0e461a30869488479df7fdd1186bd 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JmW 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c13e38c37993cde309b0e461a30869488479df7fdd1186bd 2 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c13e38c37993cde309b0e461a30869488479df7fdd1186bd 2 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c13e38c37993cde309b0e461a30869488479df7fdd1186bd 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JmW 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JmW 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.JmW 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:39.715 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9b6d0867727131db66e5b3050e551433b180779df58bfb95 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dvu 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9b6d0867727131db66e5b3050e551433b180779df58bfb95 2 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9b6d0867727131db66e5b3050e551433b180779df58bfb95 2 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9b6d0867727131db66e5b3050e551433b180779df58bfb95 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dvu 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dvu 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.dvu 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:39.716 12:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=640cb5f64860be9767fccd0b225e40f7 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Tzy 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 640cb5f64860be9767fccd0b225e40f7 1 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 640cb5f64860be9767fccd0b225e40f7 1 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=640cb5f64860be9767fccd0b225e40f7 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Tzy 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Tzy 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Tzy 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5e0c01b641321947be89449de682153b0c4a25da865cceeadb3c4bdbe865aa3d 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Pfy 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
5e0c01b641321947be89449de682153b0c4a25da865cceeadb3c4bdbe865aa3d 3 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5e0c01b641321947be89449de682153b0c4a25da865cceeadb3c4bdbe865aa3d 3 00:11:39.716 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5e0c01b641321947be89449de682153b0c4a25da865cceeadb3c4bdbe865aa3d 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Pfy 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Pfy 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Pfy 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77113 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77113 ']' 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.975 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.233 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:40.233 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:40.233 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77157 /var/tmp/host.sock 00:11:40.233 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77157 ']' 00:11:40.233 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:40.234 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:40.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:40.234 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
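Each gen_dhchap_key call above draws len/2 random bytes via xxd -p, then format_key wraps the hex string in the DH-HMAC-CHAP secret representation "DHHC-1:<dd>:<base64>:", where <dd> is the digest id from the trace (00=null, 01=sha256, 02=sha384, 03=sha512). Judging from the "DHHC-1:00:NWQ1..." and "DHHC-1:03:MTQ4..." secrets passed to nvme connect later in this log, the base64 payload is the ASCII hex key with what appears to be a little-endian CRC-32 appended. A sketch of the python step under that assumption, using keys[0] from this run:

    key=5d555e10c16114a6b8c95c3201ecbdc4e4b8d2f89720aa5b   # keys[0] above
    python3 - <<EOF
    import base64, zlib
    key = b"$key"
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumed CRC framing, not confirmed by the trace
    print("DHHC-1:{:02x}:{}:".format(0, base64.b64encode(key + crc).decode()))
    EOF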
00:11:40.234 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:40.234 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.491 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ByC 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ByC 00:11:40.492 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ByC 00:11:40.749 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.soU ]] 00:11:40.749 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.soU 00:11:40.749 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.749 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.749 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.749 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.soU 00:11:40.749 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.soU 00:11:41.007 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:41.007 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GR2 00:11:41.007 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.007 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.007 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.007 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.GR2 00:11:41.007 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.GR2 00:11:41.265 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.JmW ]] 00:11:41.265 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JmW 00:11:41.265 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.265 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.265 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.265 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JmW 00:11:41.265 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JmW 00:11:41.523 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:41.523 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.dvu 00:11:41.523 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.523 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.523 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.523 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.dvu 00:11:41.523 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.dvu 00:11:41.781 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Tzy ]] 00:11:41.781 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tzy 00:11:41.781 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.781 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.781 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.781 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tzy 00:11:41.781 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tzy 00:11:42.038 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:42.039 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Pfy 00:11:42.039 12:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.039 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.039 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.039 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Pfy 00:11:42.039 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Pfy 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.619 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:11:43.186 00:11:43.186 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.186 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.186 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.444 { 00:11:43.444 "auth": { 00:11:43.444 "dhgroup": "null", 00:11:43.444 "digest": "sha256", 00:11:43.444 "state": "completed" 00:11:43.444 }, 00:11:43.444 "cntlid": 1, 00:11:43.444 "listen_address": { 00:11:43.444 "adrfam": "IPv4", 00:11:43.444 "traddr": "10.0.0.2", 00:11:43.444 "trsvcid": "4420", 00:11:43.444 "trtype": "TCP" 00:11:43.444 }, 00:11:43.444 "peer_address": { 00:11:43.444 "adrfam": "IPv4", 00:11:43.444 "traddr": "10.0.0.1", 00:11:43.444 "trsvcid": "39284", 00:11:43.444 "trtype": "TCP" 00:11:43.444 }, 00:11:43.444 "qid": 0, 00:11:43.444 "state": "enabled", 00:11:43.444 "thread": "nvmf_tgt_poll_group_000" 00:11:43.444 } 00:11:43.444 ]' 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.444 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.703 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.966 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.966 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.967 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.967 00:11:48.967 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.967 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.967 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
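For reference, the cycle the log is iterating here condenses to a handful of RPCs. A minimal sketch of one DH-HMAC-CHAP round trip, assuming a target subsystem nqn.2024-03.io.spdk:cnode0 already listening on 10.0.0.2:4420; the key-file path is illustrative, not one generated by this run:

  # One authentication round trip, condensed from the surrounding log.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2

  # Host side: load the key into the keyring, pin one digest/dhgroup pair.
  $rpc -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.XXX  # illustrative path
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

  # Target side (default RPC socket): authorize the host with its keys.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach; the nvme0 controller only appears if the handshake completes.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0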
00:11:49.224 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.224 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.224 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.224 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.224 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.224 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.224 { 00:11:49.224 "auth": { 00:11:49.224 "dhgroup": "null", 00:11:49.224 "digest": "sha256", 00:11:49.224 "state": "completed" 00:11:49.224 }, 00:11:49.224 "cntlid": 3, 00:11:49.224 "listen_address": { 00:11:49.224 "adrfam": "IPv4", 00:11:49.224 "traddr": "10.0.0.2", 00:11:49.224 "trsvcid": "4420", 00:11:49.224 "trtype": "TCP" 00:11:49.224 }, 00:11:49.224 "peer_address": { 00:11:49.224 "adrfam": "IPv4", 00:11:49.224 "traddr": "10.0.0.1", 00:11:49.224 "trsvcid": "49974", 00:11:49.224 "trtype": "TCP" 00:11:49.224 }, 00:11:49.224 "qid": 0, 00:11:49.224 "state": "enabled", 00:11:49.224 "thread": "nvmf_tgt_poll_group_000" 00:11:49.224 } 00:11:49.224 ]' 00:11:49.224 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.224 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.224 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.482 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:49.482 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.482 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.482 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.482 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.739 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:11:50.361 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.361 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:50.361 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.361 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
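Each pass then checks, from the target's point of view, that the new queue pair negotiated exactly what was requested. The jq probes above reduce to three assertions against the qpair JSON printed by nvmf_subsystem_get_qpairs:

  # Verify the negotiated auth parameters (JSON shape as printed in this log).
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]  # requested digest
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]  # requested DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished

Only after all three hold does the test detach the controller and move on to the next key.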
00:11:50.361 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.361 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.361 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:50.361 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.928 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.928 00:11:51.187 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.187 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.187 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.187 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.187 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.187 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.187 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:51.445 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.445 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.445 { 00:11:51.445 "auth": { 00:11:51.445 "dhgroup": "null", 00:11:51.445 "digest": "sha256", 00:11:51.445 "state": "completed" 00:11:51.445 }, 00:11:51.445 "cntlid": 5, 00:11:51.445 "listen_address": { 00:11:51.445 "adrfam": "IPv4", 00:11:51.445 "traddr": "10.0.0.2", 00:11:51.445 "trsvcid": "4420", 00:11:51.445 "trtype": "TCP" 00:11:51.445 }, 00:11:51.445 "peer_address": { 00:11:51.445 "adrfam": "IPv4", 00:11:51.445 "traddr": "10.0.0.1", 00:11:51.445 "trsvcid": "50012", 00:11:51.445 "trtype": "TCP" 00:11:51.445 }, 00:11:51.445 "qid": 0, 00:11:51.445 "state": "enabled", 00:11:51.445 "thread": "nvmf_tgt_poll_group_000" 00:11:51.445 } 00:11:51.445 ]' 00:11:51.445 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.445 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.445 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.445 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:51.445 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.446 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.446 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.446 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.704 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:11:52.271 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.271 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:52.271 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.271 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.271 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.271 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.271 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:52.271 12:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.530 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:53.097 00:11:53.097 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.097 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.097 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.355 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.355 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.355 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.355 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.355 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.355 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.355 { 00:11:53.355 "auth": { 00:11:53.355 "dhgroup": "null", 00:11:53.355 "digest": "sha256", 00:11:53.355 "state": "completed" 00:11:53.355 }, 00:11:53.355 "cntlid": 7, 00:11:53.355 "listen_address": { 00:11:53.355 "adrfam": "IPv4", 00:11:53.355 
"traddr": "10.0.0.2", 00:11:53.355 "trsvcid": "4420", 00:11:53.355 "trtype": "TCP" 00:11:53.355 }, 00:11:53.355 "peer_address": { 00:11:53.355 "adrfam": "IPv4", 00:11:53.355 "traddr": "10.0.0.1", 00:11:53.355 "trsvcid": "50036", 00:11:53.355 "trtype": "TCP" 00:11:53.355 }, 00:11:53.355 "qid": 0, 00:11:53.355 "state": "enabled", 00:11:53.355 "thread": "nvmf_tgt_poll_group_000" 00:11:53.355 } 00:11:53.355 ]' 00:11:53.355 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.355 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:53.355 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.355 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:53.355 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.355 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.355 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.355 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.613 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.548 
12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.548 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.807 00:11:54.807 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.807 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.807 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.375 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.375 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.375 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.375 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.375 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.375 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.375 { 00:11:55.375 "auth": { 00:11:55.375 "dhgroup": "ffdhe2048", 00:11:55.375 "digest": "sha256", 00:11:55.375 "state": "completed" 00:11:55.375 }, 00:11:55.375 "cntlid": 9, 00:11:55.375 "listen_address": { 00:11:55.375 "adrfam": "IPv4", 00:11:55.375 "traddr": "10.0.0.2", 00:11:55.375 "trsvcid": "4420", 00:11:55.375 "trtype": "TCP" 00:11:55.375 }, 00:11:55.375 "peer_address": { 00:11:55.375 "adrfam": "IPv4", 00:11:55.375 "traddr": "10.0.0.1", 00:11:55.375 "trsvcid": "60724", 00:11:55.375 "trtype": "TCP" 00:11:55.375 }, 00:11:55.375 "qid": 0, 00:11:55.375 "state": "enabled", 00:11:55.375 "thread": "nvmf_tgt_poll_group_000" 00:11:55.375 } 
00:11:55.375 ]' 00:11:55.375 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.375 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.375 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.375 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:55.375 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.375 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.375 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.375 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.633 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.569 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.827 00:11:57.085 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.085 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.085 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.344 { 00:11:57.344 "auth": { 00:11:57.344 "dhgroup": "ffdhe2048", 00:11:57.344 "digest": "sha256", 00:11:57.344 "state": "completed" 00:11:57.344 }, 00:11:57.344 "cntlid": 11, 00:11:57.344 "listen_address": { 00:11:57.344 "adrfam": "IPv4", 00:11:57.344 "traddr": "10.0.0.2", 00:11:57.344 "trsvcid": "4420", 00:11:57.344 "trtype": "TCP" 00:11:57.344 }, 00:11:57.344 "peer_address": { 00:11:57.344 "adrfam": "IPv4", 00:11:57.344 "traddr": "10.0.0.1", 00:11:57.344 "trsvcid": "60746", 00:11:57.344 "trtype": "TCP" 00:11:57.344 }, 00:11:57.344 "qid": 0, 00:11:57.344 "state": "enabled", 00:11:57.344 "thread": "nvmf_tgt_poll_group_000" 00:11:57.344 } 00:11:57.344 ]' 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.344 12:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.344 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.602 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.572 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:58.573 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:58.573 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:58.573 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.573 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.573 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
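Each combination is additionally replayed through the Linux kernel host with nvme-cli, which takes the secrets inline in DHHC-1 wire format instead of by keyring name. A sketch with the secrets truncated; the full values appear in the log lines above:

  # Kernel-host replay of the same handshake (secrets truncated for illustration).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 \
      --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 \
      --dhchap-secret 'DHHC-1:01:NzRh...' \
      --dhchap-ctrl-secret 'DHHC-1:02:YzEz...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0  # expect: disconnected 1 controller(s)

A successful disconnect reports exactly one controller, after which the host is deauthorized again with nvmf_subsystem_remove_host.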
00:11:58.573 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.573 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.573 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.573 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.141 00:11:59.141 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.141 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.141 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.400 { 00:11:59.400 "auth": { 00:11:59.400 "dhgroup": "ffdhe2048", 00:11:59.400 "digest": "sha256", 00:11:59.400 "state": "completed" 00:11:59.400 }, 00:11:59.400 "cntlid": 13, 00:11:59.400 "listen_address": { 00:11:59.400 "adrfam": "IPv4", 00:11:59.400 "traddr": "10.0.0.2", 00:11:59.400 "trsvcid": "4420", 00:11:59.400 "trtype": "TCP" 00:11:59.400 }, 00:11:59.400 "peer_address": { 00:11:59.400 "adrfam": "IPv4", 00:11:59.400 "traddr": "10.0.0.1", 00:11:59.400 "trsvcid": "60772", 00:11:59.400 "trtype": "TCP" 00:11:59.400 }, 00:11:59.400 "qid": 0, 00:11:59.400 "state": "enabled", 00:11:59.400 "thread": "nvmf_tgt_poll_group_000" 00:11:59.400 } 00:11:59.400 ]' 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.400 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.659 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:12:00.595 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.595 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:00.595 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.595 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.595 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.595 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.595 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:00.595 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.853 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.112 00:12:01.112 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.112 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.112 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.370 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.370 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.370 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.370 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.370 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.370 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.370 { 00:12:01.370 "auth": { 00:12:01.370 "dhgroup": "ffdhe2048", 00:12:01.370 "digest": "sha256", 00:12:01.370 "state": "completed" 00:12:01.370 }, 00:12:01.370 "cntlid": 15, 00:12:01.370 "listen_address": { 00:12:01.370 "adrfam": "IPv4", 00:12:01.370 "traddr": "10.0.0.2", 00:12:01.370 "trsvcid": "4420", 00:12:01.370 "trtype": "TCP" 00:12:01.370 }, 00:12:01.370 "peer_address": { 00:12:01.370 "adrfam": "IPv4", 00:12:01.370 "traddr": "10.0.0.1", 00:12:01.370 "trsvcid": "60798", 00:12:01.370 "trtype": "TCP" 00:12:01.370 }, 00:12:01.370 "qid": 0, 00:12:01.370 "state": "enabled", 00:12:01.370 "thread": "nvmf_tgt_poll_group_000" 00:12:01.370 } 00:12:01.370 ]' 00:12:01.370 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.628 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.628 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.628 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:01.628 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.628 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.628 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.628 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.887 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:12:02.454 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.454 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:02.454 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.454 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.454 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.454 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:02.454 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.454 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:02.454 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.712 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.280 00:12:03.280 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.280 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.280 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.538 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.538 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.538 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.538 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.538 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.538 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.538 { 00:12:03.538 "auth": { 00:12:03.538 "dhgroup": "ffdhe3072", 00:12:03.539 "digest": "sha256", 00:12:03.539 "state": "completed" 00:12:03.539 }, 00:12:03.539 "cntlid": 17, 00:12:03.539 "listen_address": { 00:12:03.539 "adrfam": "IPv4", 00:12:03.539 "traddr": "10.0.0.2", 00:12:03.539 "trsvcid": "4420", 00:12:03.539 "trtype": "TCP" 00:12:03.539 }, 00:12:03.539 "peer_address": { 00:12:03.539 "adrfam": "IPv4", 00:12:03.539 "traddr": "10.0.0.1", 00:12:03.539 "trsvcid": "60818", 00:12:03.539 "trtype": "TCP" 00:12:03.539 }, 00:12:03.539 "qid": 0, 00:12:03.539 "state": "enabled", 00:12:03.539 "thread": "nvmf_tgt_poll_group_000" 00:12:03.539 } 00:12:03.539 ]' 00:12:03.539 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.539 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.539 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.539 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:03.539 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.796 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.796 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.796 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.796 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:12:04.739 12:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.739 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:04.739 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.739 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.739 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.739 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.739 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:04.739 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.997 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.256 00:12:05.256 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.256 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
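Stepping back, this whole section is one sweep of three nested loops over digests, DH groups and key indices; only the sha256 x {null, ffdhe2048, ffdhe3072} slice is visible in this excerpt. A condensed sketch of the driver, matching the target/auth.sh@91-96 fragments above (array contents are assumed, since they are not shown in full here):

  # Condensed driver loop; array contents assumed from the visible fragments.
  for digest in "${digests[@]}"; do        # sha256 visible here; others follow
    for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe3072, ...
      for keyid in "${!keys[@]}"; do       # key0..key3
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done

Note that key3 is registered without a controller key, so the ${ckeys[$3]:+...} expansion seen at target/auth.sh@37 silently drops --dhchap-ctrlr-key for that index, exercising unidirectional (host-only) authentication.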
00:12:05.256 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.514 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.514 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.514 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.514 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.514 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.514 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.514 { 00:12:05.514 "auth": { 00:12:05.514 "dhgroup": "ffdhe3072", 00:12:05.514 "digest": "sha256", 00:12:05.514 "state": "completed" 00:12:05.514 }, 00:12:05.514 "cntlid": 19, 00:12:05.514 "listen_address": { 00:12:05.514 "adrfam": "IPv4", 00:12:05.514 "traddr": "10.0.0.2", 00:12:05.514 "trsvcid": "4420", 00:12:05.514 "trtype": "TCP" 00:12:05.514 }, 00:12:05.514 "peer_address": { 00:12:05.514 "adrfam": "IPv4", 00:12:05.514 "traddr": "10.0.0.1", 00:12:05.514 "trsvcid": "43326", 00:12:05.514 "trtype": "TCP" 00:12:05.514 }, 00:12:05.514 "qid": 0, 00:12:05.514 "state": "enabled", 00:12:05.514 "thread": "nvmf_tgt_poll_group_000" 00:12:05.514 } 00:12:05.514 ]' 00:12:05.514 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.514 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:05.514 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.773 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.773 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.773 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.773 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.773 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.031 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:12:06.969 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.969 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:06.969 12:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.969 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.969 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.969 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.969 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:06.969 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.229 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.486 00:12:07.486 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.487 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.487 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.745 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.745 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:07.745 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.745 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.745 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.745 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.745 { 00:12:07.745 "auth": { 00:12:07.745 "dhgroup": "ffdhe3072", 00:12:07.745 "digest": "sha256", 00:12:07.745 "state": "completed" 00:12:07.745 }, 00:12:07.745 "cntlid": 21, 00:12:07.745 "listen_address": { 00:12:07.745 "adrfam": "IPv4", 00:12:07.745 "traddr": "10.0.0.2", 00:12:07.745 "trsvcid": "4420", 00:12:07.745 "trtype": "TCP" 00:12:07.745 }, 00:12:07.745 "peer_address": { 00:12:07.745 "adrfam": "IPv4", 00:12:07.745 "traddr": "10.0.0.1", 00:12:07.745 "trsvcid": "43344", 00:12:07.745 "trtype": "TCP" 00:12:07.745 }, 00:12:07.745 "qid": 0, 00:12:07.745 "state": "enabled", 00:12:07.745 "thread": "nvmf_tgt_poll_group_000" 00:12:07.745 } 00:12:07.745 ]' 00:12:07.745 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.745 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:07.745 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.005 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.005 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.005 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.005 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.005 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.263 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:12:08.829 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.829 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:08.829 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.829 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.829 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.830 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:08.830 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:08.830 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:09.087 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:09.654 00:12:09.654 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.654 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.654 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.913 { 00:12:09.913 "auth": { 
00:12:09.913 "dhgroup": "ffdhe3072", 00:12:09.913 "digest": "sha256", 00:12:09.913 "state": "completed" 00:12:09.913 }, 00:12:09.913 "cntlid": 23, 00:12:09.913 "listen_address": { 00:12:09.913 "adrfam": "IPv4", 00:12:09.913 "traddr": "10.0.0.2", 00:12:09.913 "trsvcid": "4420", 00:12:09.913 "trtype": "TCP" 00:12:09.913 }, 00:12:09.913 "peer_address": { 00:12:09.913 "adrfam": "IPv4", 00:12:09.913 "traddr": "10.0.0.1", 00:12:09.913 "trsvcid": "43376", 00:12:09.913 "trtype": "TCP" 00:12:09.913 }, 00:12:09.913 "qid": 0, 00:12:09.913 "state": "enabled", 00:12:09.913 "thread": "nvmf_tgt_poll_group_000" 00:12:09.913 } 00:12:09.913 ]' 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.913 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.171 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:12:11.106 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.106 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:11.106 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.106 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.106 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.106 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.106 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.106 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:11.106 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:11.365 12:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.365 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.931 00:12:11.931 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.931 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.931 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.189 { 00:12:12.189 "auth": { 00:12:12.189 "dhgroup": "ffdhe4096", 00:12:12.189 "digest": "sha256", 00:12:12.189 "state": "completed" 00:12:12.189 }, 00:12:12.189 "cntlid": 25, 00:12:12.189 "listen_address": { 00:12:12.189 "adrfam": "IPv4", 00:12:12.189 "traddr": "10.0.0.2", 00:12:12.189 "trsvcid": "4420", 00:12:12.189 "trtype": "TCP" 00:12:12.189 }, 00:12:12.189 "peer_address": { 00:12:12.189 
"adrfam": "IPv4", 00:12:12.189 "traddr": "10.0.0.1", 00:12:12.189 "trsvcid": "43394", 00:12:12.189 "trtype": "TCP" 00:12:12.189 }, 00:12:12.189 "qid": 0, 00:12:12.189 "state": "enabled", 00:12:12.189 "thread": "nvmf_tgt_poll_group_000" 00:12:12.189 } 00:12:12.189 ]' 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.189 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.448 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:12:13.382 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.382 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:13.382 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.382 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.382 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.382 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.382 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:13.382 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:13.382 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:13.382 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.383 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:13.383 12:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:13.383 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:13.383 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.383 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.383 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.383 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.383 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.383 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.383 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.641 00:12:13.899 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.899 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.899 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.158 { 00:12:14.158 "auth": { 00:12:14.158 "dhgroup": "ffdhe4096", 00:12:14.158 "digest": "sha256", 00:12:14.158 "state": "completed" 00:12:14.158 }, 00:12:14.158 "cntlid": 27, 00:12:14.158 "listen_address": { 00:12:14.158 "adrfam": "IPv4", 00:12:14.158 "traddr": "10.0.0.2", 00:12:14.158 "trsvcid": "4420", 00:12:14.158 "trtype": "TCP" 00:12:14.158 }, 00:12:14.158 "peer_address": { 00:12:14.158 "adrfam": "IPv4", 00:12:14.158 "traddr": "10.0.0.1", 00:12:14.158 "trsvcid": "57992", 00:12:14.158 "trtype": "TCP" 00:12:14.158 }, 00:12:14.158 "qid": 0, 00:12:14.158 "state": "enabled", 00:12:14.158 "thread": "nvmf_tgt_poll_group_000" 00:12:14.158 } 00:12:14.158 ]' 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.158 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.416 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:12:15.351 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.351 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:15.351 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.351 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.351 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.351 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.351 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:15.351 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.609 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.866 00:12:15.866 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.866 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.866 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.124 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.124 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.124 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.124 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.381 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.381 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.381 { 00:12:16.382 "auth": { 00:12:16.382 "dhgroup": "ffdhe4096", 00:12:16.382 "digest": "sha256", 00:12:16.382 "state": "completed" 00:12:16.382 }, 00:12:16.382 "cntlid": 29, 00:12:16.382 "listen_address": { 00:12:16.382 "adrfam": "IPv4", 00:12:16.382 "traddr": "10.0.0.2", 00:12:16.382 "trsvcid": "4420", 00:12:16.382 "trtype": "TCP" 00:12:16.382 }, 00:12:16.382 "peer_address": { 00:12:16.382 "adrfam": "IPv4", 00:12:16.382 "traddr": "10.0.0.1", 00:12:16.382 "trsvcid": "58020", 00:12:16.382 "trtype": "TCP" 00:12:16.382 }, 00:12:16.382 "qid": 0, 00:12:16.382 "state": "enabled", 00:12:16.382 "thread": "nvmf_tgt_poll_group_000" 00:12:16.382 } 00:12:16.382 ]' 00:12:16.382 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.382 12:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:16.382 12:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.382 12:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.382 12:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.382 12:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.382 12:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.382 12:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.640 12:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:12:17.574 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.574 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:17.574 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.574 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.574 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.574 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.574 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:17.574 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.832 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:17.832 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.091 00:12:18.354 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.354 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.354 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.636 { 00:12:18.636 "auth": { 00:12:18.636 "dhgroup": "ffdhe4096", 00:12:18.636 "digest": "sha256", 00:12:18.636 "state": "completed" 00:12:18.636 }, 00:12:18.636 "cntlid": 31, 00:12:18.636 "listen_address": { 00:12:18.636 "adrfam": "IPv4", 00:12:18.636 "traddr": "10.0.0.2", 00:12:18.636 "trsvcid": "4420", 00:12:18.636 "trtype": "TCP" 00:12:18.636 }, 00:12:18.636 "peer_address": { 00:12:18.636 "adrfam": "IPv4", 00:12:18.636 "traddr": "10.0.0.1", 00:12:18.636 "trsvcid": "58058", 00:12:18.636 "trtype": "TCP" 00:12:18.636 }, 00:12:18.636 "qid": 0, 00:12:18.636 "state": "enabled", 00:12:18.636 "thread": "nvmf_tgt_poll_group_000" 00:12:18.636 } 00:12:18.636 ]' 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.636 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.896 12:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:12:19.829 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.829 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:19.829 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.829 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.829 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.829 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:19.829 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.829 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:19.829 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:20.087 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.653 00:12:20.653 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.653 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.653 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.911 { 00:12:20.911 "auth": { 00:12:20.911 "dhgroup": "ffdhe6144", 00:12:20.911 "digest": "sha256", 00:12:20.911 "state": "completed" 00:12:20.911 }, 00:12:20.911 "cntlid": 33, 00:12:20.911 "listen_address": { 00:12:20.911 "adrfam": "IPv4", 00:12:20.911 "traddr": "10.0.0.2", 00:12:20.911 "trsvcid": "4420", 00:12:20.911 "trtype": "TCP" 00:12:20.911 }, 00:12:20.911 "peer_address": { 00:12:20.911 "adrfam": "IPv4", 00:12:20.911 "traddr": "10.0.0.1", 00:12:20.911 "trsvcid": "58086", 00:12:20.911 "trtype": "TCP" 00:12:20.911 }, 00:12:20.911 "qid": 0, 00:12:20.911 "state": "enabled", 00:12:20.911 "thread": "nvmf_tgt_poll_group_000" 00:12:20.911 } 00:12:20.911 ]' 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.911 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.478 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 
12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:12:22.046 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.046 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:22.046 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.046 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.046 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.046 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.046 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:22.046 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:22.304 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:22.304 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.304 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:22.304 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:22.304 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:22.304 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.305 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.305 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.305 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.305 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.305 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.305 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.871 00:12:22.871 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.871 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.871 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.128 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.128 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.128 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.128 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.128 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.128 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.128 { 00:12:23.128 "auth": { 00:12:23.128 "dhgroup": "ffdhe6144", 00:12:23.128 "digest": "sha256", 00:12:23.128 "state": "completed" 00:12:23.128 }, 00:12:23.128 "cntlid": 35, 00:12:23.128 "listen_address": { 00:12:23.128 "adrfam": "IPv4", 00:12:23.128 "traddr": "10.0.0.2", 00:12:23.128 "trsvcid": "4420", 00:12:23.128 "trtype": "TCP" 00:12:23.128 }, 00:12:23.128 "peer_address": { 00:12:23.129 "adrfam": "IPv4", 00:12:23.129 "traddr": "10.0.0.1", 00:12:23.129 "trsvcid": "58100", 00:12:23.129 "trtype": "TCP" 00:12:23.129 }, 00:12:23.129 "qid": 0, 00:12:23.129 "state": "enabled", 00:12:23.129 "thread": "nvmf_tgt_poll_group_000" 00:12:23.129 } 00:12:23.129 ]' 00:12:23.129 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.129 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:23.129 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.129 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:23.129 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.387 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.387 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.387 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.645 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:12:24.211 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.211 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.211 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:24.211 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.211 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.211 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.211 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.211 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:24.211 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.469 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.036 00:12:25.036 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.036 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.036 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.307 { 00:12:25.307 "auth": { 00:12:25.307 "dhgroup": "ffdhe6144", 00:12:25.307 "digest": "sha256", 00:12:25.307 "state": "completed" 00:12:25.307 }, 00:12:25.307 "cntlid": 37, 00:12:25.307 "listen_address": { 00:12:25.307 "adrfam": "IPv4", 00:12:25.307 "traddr": "10.0.0.2", 00:12:25.307 "trsvcid": "4420", 00:12:25.307 "trtype": "TCP" 00:12:25.307 }, 00:12:25.307 "peer_address": { 00:12:25.307 "adrfam": "IPv4", 00:12:25.307 "traddr": "10.0.0.1", 00:12:25.307 "trsvcid": "59210", 00:12:25.307 "trtype": "TCP" 00:12:25.307 }, 00:12:25.307 "qid": 0, 00:12:25.307 "state": "enabled", 00:12:25.307 "thread": "nvmf_tgt_poll_group_000" 00:12:25.307 } 00:12:25.307 ]' 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:25.307 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.578 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.578 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.578 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.578 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:12:26.514 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.514 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:26.514 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
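Between the RPC rounds the suite also exercises the Linux initiator: nvme-cli receives the same key material inline, as DHHC-1 secrets, rather than as SPDK key names. In a DHHC-1:NN:<base64>: string the two-digit field records how the secret was transformed per the NVMe-oF secret representation (00 untransformed, 01/02/03 SHA-256/-384/-512), so the DHHC-1:02:/DHHC-1:01: pair in the preceding connect is a SHA-384-class host secret paired with a SHA-256-class controller secret. One host-side probe, stripped down (a minimal sketch reusing this run's NQNs and address; substitute real secrets for the placeholders — the gen-dhchap-key step is illustrative only, since the suite uses fixed, pre-generated secrets):

  # Optionally mint a fresh SHA-256-class host secret (requires an
  # nvme-cli build with DH-HMAC-CHAP support).
  nvme gen-dhchap-key --hmac=1 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2

  # Connect with an explicit host secret and, for bidirectional
  # authentication, a controller secret.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 \
      --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 \
      --dhchap-secret 'DHHC-1:00:<base64-host-key>:' \
      --dhchap-ctrl-secret 'DHHC-1:00:<base64-ctrl-key>:'

  # Tear down before the next digest/dhgroup/key combination.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The "NQN:... disconnected 1 controller(s)" lines in the trace are nvme-cli's acknowledgement of that last step; the target-side nvmf_subsystem_remove_host that follows it clears the host entry so the next key index starts clean.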
00:12:26.514 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.514 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.514 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.514 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:26.514 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.771 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:27.334 00:12:27.334 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.334 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.334 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.591 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.592 12:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.592 { 00:12:27.592 "auth": { 00:12:27.592 "dhgroup": "ffdhe6144", 00:12:27.592 "digest": "sha256", 00:12:27.592 "state": "completed" 00:12:27.592 }, 00:12:27.592 "cntlid": 39, 00:12:27.592 "listen_address": { 00:12:27.592 "adrfam": "IPv4", 00:12:27.592 "traddr": "10.0.0.2", 00:12:27.592 "trsvcid": "4420", 00:12:27.592 "trtype": "TCP" 00:12:27.592 }, 00:12:27.592 "peer_address": { 00:12:27.592 "adrfam": "IPv4", 00:12:27.592 "traddr": "10.0.0.1", 00:12:27.592 "trsvcid": "59238", 00:12:27.592 "trtype": "TCP" 00:12:27.592 }, 00:12:27.592 "qid": 0, 00:12:27.592 "state": "enabled", 00:12:27.592 "thread": "nvmf_tgt_poll_group_000" 00:12:27.592 } 00:12:27.592 ]' 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.592 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.850 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:12:28.783 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.783 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:28.783 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.783 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.783 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.783 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:28.783 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.783 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:28.783 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:29.042 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:12:29.042 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.042 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:29.043 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.043 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:29.043 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.043 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.043 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.043 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.043 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.043 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.043 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.607 00:12:29.607 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.607 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.607 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.172 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.173 { 00:12:30.173 "auth": { 00:12:30.173 "dhgroup": 
"ffdhe8192", 00:12:30.173 "digest": "sha256", 00:12:30.173 "state": "completed" 00:12:30.173 }, 00:12:30.173 "cntlid": 41, 00:12:30.173 "listen_address": { 00:12:30.173 "adrfam": "IPv4", 00:12:30.173 "traddr": "10.0.0.2", 00:12:30.173 "trsvcid": "4420", 00:12:30.173 "trtype": "TCP" 00:12:30.173 }, 00:12:30.173 "peer_address": { 00:12:30.173 "adrfam": "IPv4", 00:12:30.173 "traddr": "10.0.0.1", 00:12:30.173 "trsvcid": "59278", 00:12:30.173 "trtype": "TCP" 00:12:30.173 }, 00:12:30.173 "qid": 0, 00:12:30.173 "state": "enabled", 00:12:30.173 "thread": "nvmf_tgt_poll_group_000" 00:12:30.173 } 00:12:30.173 ]' 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.173 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.430 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:12:30.995 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.253 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:31.253 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.253 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.253 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.253 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.253 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:31.253 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.511 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.092 00:12:32.092 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.092 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.092 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.349 { 00:12:32.349 "auth": { 00:12:32.349 "dhgroup": "ffdhe8192", 00:12:32.349 "digest": "sha256", 00:12:32.349 "state": "completed" 00:12:32.349 }, 00:12:32.349 "cntlid": 43, 00:12:32.349 "listen_address": { 00:12:32.349 "adrfam": "IPv4", 00:12:32.349 "traddr": "10.0.0.2", 00:12:32.349 "trsvcid": "4420", 00:12:32.349 "trtype": "TCP" 00:12:32.349 }, 00:12:32.349 "peer_address": { 00:12:32.349 "adrfam": "IPv4", 00:12:32.349 "traddr": 
"10.0.0.1", 00:12:32.349 "trsvcid": "59304", 00:12:32.349 "trtype": "TCP" 00:12:32.349 }, 00:12:32.349 "qid": 0, 00:12:32.349 "state": "enabled", 00:12:32.349 "thread": "nvmf_tgt_poll_group_000" 00:12:32.349 } 00:12:32.349 ]' 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.349 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.611 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.611 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.611 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.869 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:12:33.435 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.435 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:33.435 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.435 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.435 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.435 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.435 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:33.435 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:34.001 12:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.001 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.566 00:12:34.566 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.566 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.566 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.823 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.823 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.823 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.823 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.823 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.823 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.823 { 00:12:34.823 "auth": { 00:12:34.823 "dhgroup": "ffdhe8192", 00:12:34.823 "digest": "sha256", 00:12:34.823 "state": "completed" 00:12:34.823 }, 00:12:34.823 "cntlid": 45, 00:12:34.823 "listen_address": { 00:12:34.823 "adrfam": "IPv4", 00:12:34.823 "traddr": "10.0.0.2", 00:12:34.823 "trsvcid": "4420", 00:12:34.823 "trtype": "TCP" 00:12:34.823 }, 00:12:34.823 "peer_address": { 00:12:34.823 "adrfam": "IPv4", 00:12:34.823 "traddr": "10.0.0.1", 00:12:34.823 "trsvcid": "59216", 00:12:34.823 "trtype": "TCP" 00:12:34.823 }, 00:12:34.823 "qid": 0, 00:12:34.823 "state": "enabled", 00:12:34.823 "thread": "nvmf_tgt_poll_group_000" 00:12:34.823 } 00:12:34.823 ]' 00:12:34.824 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.824 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.824 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.824 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.824 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.081 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.081 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.081 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.339 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:12:35.905 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.905 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:35.905 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.906 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.906 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.906 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.906 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:35.906 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 
--dhchap-key key3 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:36.471 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.038 00:12:37.038 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.038 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.038 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.296 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.296 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.296 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.296 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.296 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.296 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.296 { 00:12:37.296 "auth": { 00:12:37.296 "dhgroup": "ffdhe8192", 00:12:37.296 "digest": "sha256", 00:12:37.296 "state": "completed" 00:12:37.296 }, 00:12:37.296 "cntlid": 47, 00:12:37.296 "listen_address": { 00:12:37.296 "adrfam": "IPv4", 00:12:37.296 "traddr": "10.0.0.2", 00:12:37.296 "trsvcid": "4420", 00:12:37.296 "trtype": "TCP" 00:12:37.296 }, 00:12:37.296 "peer_address": { 00:12:37.296 "adrfam": "IPv4", 00:12:37.296 "traddr": "10.0.0.1", 00:12:37.296 "trsvcid": "59240", 00:12:37.296 "trtype": "TCP" 00:12:37.296 }, 00:12:37.296 "qid": 0, 00:12:37.296 "state": "enabled", 00:12:37.296 "thread": "nvmf_tgt_poll_group_000" 00:12:37.296 } 00:12:37.296 ]' 00:12:37.296 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.296 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.296 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.296 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:37.296 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.296 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:12:37.296 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.296 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.863 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:38.475 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.733 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.989 00:12:38.990 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.990 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.990 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.248 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.248 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.248 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.248 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.248 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.248 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.248 { 00:12:39.248 "auth": { 00:12:39.248 "dhgroup": "null", 00:12:39.248 "digest": "sha384", 00:12:39.248 "state": "completed" 00:12:39.248 }, 00:12:39.248 "cntlid": 49, 00:12:39.248 "listen_address": { 00:12:39.248 "adrfam": "IPv4", 00:12:39.248 "traddr": "10.0.0.2", 00:12:39.248 "trsvcid": "4420", 00:12:39.248 "trtype": "TCP" 00:12:39.248 }, 00:12:39.248 "peer_address": { 00:12:39.248 "adrfam": "IPv4", 00:12:39.248 "traddr": "10.0.0.1", 00:12:39.248 "trsvcid": "59256", 00:12:39.248 "trtype": "TCP" 00:12:39.248 }, 00:12:39.248 "qid": 0, 00:12:39.248 "state": "enabled", 00:12:39.248 "thread": "nvmf_tgt_poll_group_000" 00:12:39.248 } 00:12:39.248 ]' 00:12:39.248 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.248 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.248 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.248 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:39.248 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.506 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.506 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.506 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.764 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:12:40.334 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.334 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:40.334 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.334 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.334 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.334 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.334 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:40.334 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.592 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.161 00:12:41.161 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.161 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.161 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.434 { 00:12:41.434 "auth": { 00:12:41.434 "dhgroup": "null", 00:12:41.434 "digest": "sha384", 00:12:41.434 "state": "completed" 00:12:41.434 }, 00:12:41.434 "cntlid": 51, 00:12:41.434 "listen_address": { 00:12:41.434 "adrfam": "IPv4", 00:12:41.434 "traddr": "10.0.0.2", 00:12:41.434 "trsvcid": "4420", 00:12:41.434 "trtype": "TCP" 00:12:41.434 }, 00:12:41.434 "peer_address": { 00:12:41.434 "adrfam": "IPv4", 00:12:41.434 "traddr": "10.0.0.1", 00:12:41.434 "trsvcid": "59284", 00:12:41.434 "trtype": "TCP" 00:12:41.434 }, 00:12:41.434 "qid": 0, 00:12:41.434 "state": "enabled", 00:12:41.434 "thread": "nvmf_tgt_poll_group_000" 00:12:41.434 } 00:12:41.434 ]' 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.434 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.692 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret 
DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:12:42.626 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.626 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:42.626 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.626 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.626 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.626 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.626 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:42.626 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.884 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.142 00:12:43.142 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.142 12:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.142 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.400 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.400 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.400 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.400 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.400 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.400 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.400 { 00:12:43.400 "auth": { 00:12:43.400 "dhgroup": "null", 00:12:43.400 "digest": "sha384", 00:12:43.400 "state": "completed" 00:12:43.400 }, 00:12:43.400 "cntlid": 53, 00:12:43.400 "listen_address": { 00:12:43.400 "adrfam": "IPv4", 00:12:43.400 "traddr": "10.0.0.2", 00:12:43.400 "trsvcid": "4420", 00:12:43.400 "trtype": "TCP" 00:12:43.400 }, 00:12:43.400 "peer_address": { 00:12:43.400 "adrfam": "IPv4", 00:12:43.400 "traddr": "10.0.0.1", 00:12:43.400 "trsvcid": "59318", 00:12:43.400 "trtype": "TCP" 00:12:43.400 }, 00:12:43.400 "qid": 0, 00:12:43.400 "state": "enabled", 00:12:43.400 "thread": "nvmf_tgt_poll_group_000" 00:12:43.400 } 00:12:43.400 ]' 00:12:43.400 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.400 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.401 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.401 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:43.401 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.401 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.401 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.401 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.658 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:12:44.603 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.603 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:44.603 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.603 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.603 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.603 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.603 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:44.603 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.861 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:45.119 00:12:45.119 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.119 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.119 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.486 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.486 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:45.486 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.486 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.486 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.486 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.486 { 00:12:45.486 "auth": { 00:12:45.486 "dhgroup": "null", 00:12:45.486 "digest": "sha384", 00:12:45.486 "state": "completed" 00:12:45.486 }, 00:12:45.486 "cntlid": 55, 00:12:45.486 "listen_address": { 00:12:45.486 "adrfam": "IPv4", 00:12:45.486 "traddr": "10.0.0.2", 00:12:45.486 "trsvcid": "4420", 00:12:45.486 "trtype": "TCP" 00:12:45.486 }, 00:12:45.486 "peer_address": { 00:12:45.486 "adrfam": "IPv4", 00:12:45.486 "traddr": "10.0.0.1", 00:12:45.486 "trsvcid": "43194", 00:12:45.486 "trtype": "TCP" 00:12:45.486 }, 00:12:45.486 "qid": 0, 00:12:45.486 "state": "enabled", 00:12:45.486 "thread": "nvmf_tgt_poll_group_000" 00:12:45.486 } 00:12:45.486 ]' 00:12:45.486 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.486 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.486 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.750 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:45.750 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.750 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.750 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.750 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.064 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:12:46.698 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.698 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:46.698 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.698 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.698 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.698 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:46.698 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.698 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:46.698 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.979 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.238 00:12:47.238 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.238 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.238 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.496 12:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.496 { 00:12:47.496 "auth": { 00:12:47.496 "dhgroup": "ffdhe2048", 00:12:47.496 "digest": "sha384", 00:12:47.496 "state": "completed" 00:12:47.496 }, 00:12:47.496 "cntlid": 57, 00:12:47.496 "listen_address": { 00:12:47.496 "adrfam": "IPv4", 00:12:47.496 "traddr": "10.0.0.2", 00:12:47.496 "trsvcid": "4420", 00:12:47.496 "trtype": "TCP" 00:12:47.496 }, 00:12:47.496 "peer_address": { 00:12:47.496 "adrfam": "IPv4", 00:12:47.496 "traddr": "10.0.0.1", 00:12:47.496 "trsvcid": "43214", 00:12:47.496 "trtype": "TCP" 00:12:47.496 }, 00:12:47.496 "qid": 0, 00:12:47.496 "state": "enabled", 00:12:47.496 "thread": "nvmf_tgt_poll_group_000" 00:12:47.496 } 00:12:47.496 ]' 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.496 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.754 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:12:48.694 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.695 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:48.695 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.695 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.695 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.695 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.695 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.695 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.952 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.953 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.210 00:12:49.467 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.467 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.467 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.726 { 00:12:49.726 "auth": { 00:12:49.726 "dhgroup": "ffdhe2048", 00:12:49.726 "digest": "sha384", 00:12:49.726 "state": "completed" 00:12:49.726 }, 00:12:49.726 "cntlid": 59, 00:12:49.726 "listen_address": { 00:12:49.726 "adrfam": "IPv4", 00:12:49.726 "traddr": "10.0.0.2", 00:12:49.726 "trsvcid": 
"4420", 00:12:49.726 "trtype": "TCP" 00:12:49.726 }, 00:12:49.726 "peer_address": { 00:12:49.726 "adrfam": "IPv4", 00:12:49.726 "traddr": "10.0.0.1", 00:12:49.726 "trsvcid": "43250", 00:12:49.726 "trtype": "TCP" 00:12:49.726 }, 00:12:49.726 "qid": 0, 00:12:49.726 "state": "enabled", 00:12:49.726 "thread": "nvmf_tgt_poll_group_000" 00:12:49.726 } 00:12:49.726 ]' 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.726 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.291 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:12:50.857 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.857 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:50.857 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.857 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.857 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.857 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.857 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:50.858 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:51.116 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:51.116 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.116 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:12:51.116 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:51.116 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:51.116 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.116 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.116 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.117 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.117 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.117 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.117 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.682 00:12:51.682 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.682 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.682 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.939 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.939 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.939 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.939 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.939 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.939 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.939 { 00:12:51.939 "auth": { 00:12:51.939 "dhgroup": "ffdhe2048", 00:12:51.939 "digest": "sha384", 00:12:51.939 "state": "completed" 00:12:51.939 }, 00:12:51.939 "cntlid": 61, 00:12:51.939 "listen_address": { 00:12:51.939 "adrfam": "IPv4", 00:12:51.939 "traddr": "10.0.0.2", 00:12:51.939 "trsvcid": "4420", 00:12:51.939 "trtype": "TCP" 00:12:51.939 }, 00:12:51.939 "peer_address": { 00:12:51.939 "adrfam": "IPv4", 00:12:51.939 "traddr": "10.0.0.1", 00:12:51.939 "trsvcid": "43286", 00:12:51.939 "trtype": "TCP" 00:12:51.939 }, 00:12:51.939 "qid": 0, 00:12:51.939 "state": "enabled", 00:12:51.939 "thread": "nvmf_tgt_poll_group_000" 00:12:51.939 } 00:12:51.939 ]' 00:12:51.939 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.939 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.939 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.197 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:52.197 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.197 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.197 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.197 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.456 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:12:53.021 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.278 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:53.278 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.278 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.278 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.278 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.278 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:53.278 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.537 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.795 00:12:53.795 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.795 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.795 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.054 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.054 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.054 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.054 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.054 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.054 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.054 { 00:12:54.054 "auth": { 00:12:54.054 "dhgroup": "ffdhe2048", 00:12:54.054 "digest": "sha384", 00:12:54.054 "state": "completed" 00:12:54.054 }, 00:12:54.054 "cntlid": 63, 00:12:54.054 "listen_address": { 00:12:54.054 "adrfam": "IPv4", 00:12:54.054 "traddr": "10.0.0.2", 00:12:54.054 "trsvcid": "4420", 00:12:54.054 "trtype": "TCP" 00:12:54.054 }, 00:12:54.054 "peer_address": { 00:12:54.054 "adrfam": "IPv4", 00:12:54.054 "traddr": "10.0.0.1", 00:12:54.054 "trsvcid": "36686", 00:12:54.054 "trtype": "TCP" 00:12:54.054 }, 00:12:54.054 "qid": 0, 00:12:54.054 "state": "enabled", 00:12:54.054 "thread": "nvmf_tgt_poll_group_000" 00:12:54.054 } 00:12:54.054 ]' 00:12:54.054 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.054 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:54.054 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.314 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:54.314 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:12:54.314 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.314 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.314 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.575 12:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.508 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:55.509 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:55.509 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:55.509 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.509 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.509 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.509 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.509 12:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.509 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.509 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.074 00:12:56.074 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.074 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.074 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.331 { 00:12:56.331 "auth": { 00:12:56.331 "dhgroup": "ffdhe3072", 00:12:56.331 "digest": "sha384", 00:12:56.331 "state": "completed" 00:12:56.331 }, 00:12:56.331 "cntlid": 65, 00:12:56.331 "listen_address": { 00:12:56.331 "adrfam": "IPv4", 00:12:56.331 "traddr": "10.0.0.2", 00:12:56.331 "trsvcid": "4420", 00:12:56.331 "trtype": "TCP" 00:12:56.331 }, 00:12:56.331 "peer_address": { 00:12:56.331 "adrfam": "IPv4", 00:12:56.331 "traddr": "10.0.0.1", 00:12:56.331 "trsvcid": "36718", 00:12:56.331 "trtype": "TCP" 00:12:56.331 }, 00:12:56.331 "qid": 0, 00:12:56.331 "state": "enabled", 00:12:56.331 "thread": "nvmf_tgt_poll_group_000" 00:12:56.331 } 00:12:56.331 ]' 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.331 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.896 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:12:57.461 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.461 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:57.461 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.461 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.461 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.461 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.461 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:57.461 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:12:57.720 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.982 00:12:58.250 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.250 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.250 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.509 { 00:12:58.509 "auth": { 00:12:58.509 "dhgroup": "ffdhe3072", 00:12:58.509 "digest": "sha384", 00:12:58.509 "state": "completed" 00:12:58.509 }, 00:12:58.509 "cntlid": 67, 00:12:58.509 "listen_address": { 00:12:58.509 "adrfam": "IPv4", 00:12:58.509 "traddr": "10.0.0.2", 00:12:58.509 "trsvcid": "4420", 00:12:58.509 "trtype": "TCP" 00:12:58.509 }, 00:12:58.509 "peer_address": { 00:12:58.509 "adrfam": "IPv4", 00:12:58.509 "traddr": "10.0.0.1", 00:12:58.509 "trsvcid": "36738", 00:12:58.509 "trtype": "TCP" 00:12:58.509 }, 00:12:58.509 "qid": 0, 00:12:58.509 "state": "enabled", 00:12:58.509 "thread": "nvmf_tgt_poll_group_000" 00:12:58.509 } 00:12:58.509 ]' 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.509 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.766 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 
12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:12:59.699 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.699 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:12:59.699 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.699 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.699 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.699 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.699 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:59.699 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:59.957 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:59.957 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.957 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:59.958 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:59.958 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:59.958 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.958 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.958 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.958 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.958 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.958 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.958 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
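With the controller attached, every connect_authenticate iteration runs the same verification, which the records that follow show for cntlid 69: confirm the controller name, capture nvmf_subsystem_get_qpairs from the target, and probe the admin qpair's auth object with jq. A condensed sketch, with field names exactly as they appear in the JSON dumps in this log; "$dhgroup" generalizes the literal group the trace compares against:

    # Verification step of connect_authenticate (cf. target/auth.sh@44-49 in the trace)
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384     ]]  # negotiated digest
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]  # auth handshake finished
    hostrpc bdev_nvme_detach_controller nvme0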
00:13:00.216 00:13:00.474 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.474 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.474 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.731 { 00:13:00.731 "auth": { 00:13:00.731 "dhgroup": "ffdhe3072", 00:13:00.731 "digest": "sha384", 00:13:00.731 "state": "completed" 00:13:00.731 }, 00:13:00.731 "cntlid": 69, 00:13:00.731 "listen_address": { 00:13:00.731 "adrfam": "IPv4", 00:13:00.731 "traddr": "10.0.0.2", 00:13:00.731 "trsvcid": "4420", 00:13:00.731 "trtype": "TCP" 00:13:00.731 }, 00:13:00.731 "peer_address": { 00:13:00.731 "adrfam": "IPv4", 00:13:00.731 "traddr": "10.0.0.1", 00:13:00.731 "trsvcid": "36768", 00:13:00.731 "trtype": "TCP" 00:13:00.731 }, 00:13:00.731 "qid": 0, 00:13:00.731 "state": "enabled", 00:13:00.731 "thread": "nvmf_tgt_poll_group_000" 00:13:00.731 } 00:13:00.731 ]' 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.731 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.989 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:13:01.922 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
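After the detach, the same key material is pushed through the kernel initiator: nvme connect is issued with the DHHC-1 secrets and torn down again, which is the connect/disconnect pair seen just above. A sketch with the secrets elided (full values appear in the trace; iterations using key3 pass only --dhchap-secret, since key3 has no controller key):

    # Kernel-initiator round trip (cf. target/auth.sh@52 and @55); <...> elide the DHHC-1 strings above
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 \
        --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 \
        --dhchap-secret '<host key>' --dhchap-ctrl-secret '<controller key>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

As background (this reading comes from the NVMe in-band authentication TP and nvme-cli conventions, not from the log itself): the secret strings follow the DHHC-1 key format, DHHC-1:<id>:<base64>:, where <id> 00/01/02/03 selects no transform or an SHA-256/384/512 transform and the base64 payload carries the key plus a 4-byte CRC. The payload lengths in this log are consistent with that: the 01/02/03 secrets decode to 32+4, 48+4, and 64+4 bytes respectively.

    # Background sketch of the key format, using a secret copied verbatim from the trace above
    secret='DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=:'
    IFS=: read -r fmt hmac b64 _ <<< "$secret"   # fmt=DHHC-1, hmac=03 (SHA-512 transform), b64=key||crc32
    printf '%s\n' "$b64" | base64 -d | wc -c     # 68: a 64-byte key plus the 4-byte CRC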
00:13:01.922 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:01.922 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.922 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.922 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.922 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.922 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:01.922 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:01.922 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:01.923 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.512 00:13:02.512 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.512 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.512 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.769 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.769 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.769 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.769 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.769 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.770 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.770 { 00:13:02.770 "auth": { 00:13:02.770 "dhgroup": "ffdhe3072", 00:13:02.770 "digest": "sha384", 00:13:02.770 "state": "completed" 00:13:02.770 }, 00:13:02.770 "cntlid": 71, 00:13:02.770 "listen_address": { 00:13:02.770 "adrfam": "IPv4", 00:13:02.770 "traddr": "10.0.0.2", 00:13:02.770 "trsvcid": "4420", 00:13:02.770 "trtype": "TCP" 00:13:02.770 }, 00:13:02.770 "peer_address": { 00:13:02.770 "adrfam": "IPv4", 00:13:02.770 "traddr": "10.0.0.1", 00:13:02.770 "trsvcid": "36790", 00:13:02.770 "trtype": "TCP" 00:13:02.770 }, 00:13:02.770 "qid": 0, 00:13:02.770 "state": "enabled", 00:13:02.770 "thread": "nvmf_tgt_poll_group_000" 00:13:02.770 } 00:13:02.770 ]' 00:13:02.770 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.770 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:02.770 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.770 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:02.770 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.770 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.770 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.770 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.026 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.959 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.217 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.217 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.217 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.474 00:13:04.474 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:04.474 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.474 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.040 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.040 { 00:13:05.040 "auth": { 00:13:05.040 "dhgroup": "ffdhe4096", 00:13:05.040 "digest": "sha384", 00:13:05.040 "state": "completed" 00:13:05.040 }, 00:13:05.040 "cntlid": 73, 00:13:05.040 "listen_address": { 00:13:05.040 "adrfam": "IPv4", 00:13:05.040 "traddr": "10.0.0.2", 00:13:05.040 "trsvcid": "4420", 00:13:05.040 "trtype": "TCP" 00:13:05.040 }, 00:13:05.040 "peer_address": { 00:13:05.040 "adrfam": "IPv4", 00:13:05.040 "traddr": "10.0.0.1", 00:13:05.040 "trsvcid": "43562", 00:13:05.040 "trtype": "TCP" 00:13:05.040 }, 00:13:05.040 "qid": 0, 00:13:05.040 "state": "enabled", 00:13:05.040 "thread": "nvmf_tgt_poll_group_000" 00:13:05.040 } 00:13:05.040 ]' 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.040 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.298 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:13:06.233 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.233 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:06.233 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.233 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.233 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.233 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.233 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:06.233 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:06.233 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.234 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.800 00:13:06.800 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.800 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.800 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.059 { 00:13:07.059 "auth": { 00:13:07.059 "dhgroup": "ffdhe4096", 
00:13:07.059 "digest": "sha384", 00:13:07.059 "state": "completed" 00:13:07.059 }, 00:13:07.059 "cntlid": 75, 00:13:07.059 "listen_address": { 00:13:07.059 "adrfam": "IPv4", 00:13:07.059 "traddr": "10.0.0.2", 00:13:07.059 "trsvcid": "4420", 00:13:07.059 "trtype": "TCP" 00:13:07.059 }, 00:13:07.059 "peer_address": { 00:13:07.059 "adrfam": "IPv4", 00:13:07.059 "traddr": "10.0.0.1", 00:13:07.059 "trsvcid": "43590", 00:13:07.059 "trtype": "TCP" 00:13:07.059 }, 00:13:07.059 "qid": 0, 00:13:07.059 "state": "enabled", 00:13:07.059 "thread": "nvmf_tgt_poll_group_000" 00:13:07.059 } 00:13:07.059 ]' 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.059 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.624 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:13:08.188 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.188 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:08.188 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.188 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.188 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.188 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.188 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:08.188 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.446 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.010 00:13:09.010 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.010 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.010 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.267 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.267 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.267 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.267 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.267 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.267 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.267 { 00:13:09.267 "auth": { 00:13:09.267 "dhgroup": "ffdhe4096", 00:13:09.267 "digest": "sha384", 00:13:09.267 "state": "completed" 00:13:09.267 }, 00:13:09.267 "cntlid": 77, 00:13:09.267 "listen_address": { 00:13:09.267 "adrfam": "IPv4", 00:13:09.267 "traddr": "10.0.0.2", 00:13:09.267 "trsvcid": "4420", 00:13:09.267 "trtype": "TCP" 00:13:09.267 }, 00:13:09.267 "peer_address": { 00:13:09.267 "adrfam": "IPv4", 00:13:09.267 "traddr": "10.0.0.1", 00:13:09.267 "trsvcid": "43610", 00:13:09.267 "trtype": 
"TCP" 00:13:09.267 }, 00:13:09.267 "qid": 0, 00:13:09.267 "state": "enabled", 00:13:09.267 "thread": "nvmf_tgt_poll_group_000" 00:13:09.267 } 00:13:09.267 ]' 00:13:09.267 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.267 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.267 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.267 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:09.267 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.268 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.268 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.268 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.834 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:13:10.400 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.400 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:10.400 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.400 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.400 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.400 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.400 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:10.400 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.658 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:11.224 00:13:11.224 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.224 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.224 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.482 { 00:13:11.482 "auth": { 00:13:11.482 "dhgroup": "ffdhe4096", 00:13:11.482 "digest": "sha384", 00:13:11.482 "state": "completed" 00:13:11.482 }, 00:13:11.482 "cntlid": 79, 00:13:11.482 "listen_address": { 00:13:11.482 "adrfam": "IPv4", 00:13:11.482 "traddr": "10.0.0.2", 00:13:11.482 "trsvcid": "4420", 00:13:11.482 "trtype": "TCP" 00:13:11.482 }, 00:13:11.482 "peer_address": { 00:13:11.482 "adrfam": "IPv4", 00:13:11.482 "traddr": "10.0.0.1", 00:13:11.482 "trsvcid": "43636", 00:13:11.482 "trtype": "TCP" 00:13:11.482 }, 00:13:11.482 "qid": 0, 00:13:11.482 "state": "enabled", 00:13:11.482 "thread": "nvmf_tgt_poll_group_000" 00:13:11.482 } 00:13:11.482 ]' 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
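Key index 3 is the one slot in this run without a companion controller key: the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion echoed above produces an empty array when `ckeys[3]` is unset, which is why the key3 add_host and attach_controller calls carry only --dhchap-key key3 and the qpair completes with host-side (unidirectional) DHCHAP only. A minimal standalone illustration of that `${var:+word}` idiom, with `keyid` standing in for the function's `$3` and placeholder values where the harness stores its generated secrets:

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # no controller key generated for index 3
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"    # prints 0: the flag pair is omitted entirely
    keyid=1
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"     # prints: --dhchap-ctrlr-key ckey1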
00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.482 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.740 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:13:12.676 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.676 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:12.676 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.676 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.676 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.676 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.676 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.676 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:12.676 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:12.933 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.934 12:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.934 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.191 00:13:13.191 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.191 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.191 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.756 { 00:13:13.756 "auth": { 00:13:13.756 "dhgroup": "ffdhe6144", 00:13:13.756 "digest": "sha384", 00:13:13.756 "state": "completed" 00:13:13.756 }, 00:13:13.756 "cntlid": 81, 00:13:13.756 "listen_address": { 00:13:13.756 "adrfam": "IPv4", 00:13:13.756 "traddr": "10.0.0.2", 00:13:13.756 "trsvcid": "4420", 00:13:13.756 "trtype": "TCP" 00:13:13.756 }, 00:13:13.756 "peer_address": { 00:13:13.756 "adrfam": "IPv4", 00:13:13.756 "traddr": "10.0.0.1", 00:13:13.756 "trsvcid": "43656", 00:13:13.756 "trtype": "TCP" 00:13:13.756 }, 00:13:13.756 "qid": 0, 00:13:13.756 "state": "enabled", 00:13:13.756 "thread": "nvmf_tgt_poll_group_000" 00:13:13.756 } 00:13:13.756 ]' 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.756 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.015 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:13:14.581 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.581 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:14.581 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.581 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.581 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.581 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.581 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:14.581 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.840 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.406 00:13:15.406 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.406 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.406 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.664 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.664 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.664 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.664 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.664 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.664 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.664 { 00:13:15.664 "auth": { 00:13:15.664 "dhgroup": "ffdhe6144", 00:13:15.664 "digest": "sha384", 00:13:15.664 "state": "completed" 00:13:15.664 }, 00:13:15.664 "cntlid": 83, 00:13:15.664 "listen_address": { 00:13:15.664 "adrfam": "IPv4", 00:13:15.664 "traddr": "10.0.0.2", 00:13:15.664 "trsvcid": "4420", 00:13:15.664 "trtype": "TCP" 00:13:15.664 }, 00:13:15.664 "peer_address": { 00:13:15.664 "adrfam": "IPv4", 00:13:15.664 "traddr": "10.0.0.1", 00:13:15.664 "trsvcid": "41962", 00:13:15.664 "trtype": "TCP" 00:13:15.664 }, 00:13:15.664 "qid": 0, 00:13:15.664 "state": "enabled", 00:13:15.664 "thread": "nvmf_tgt_poll_group_000" 00:13:15.664 } 00:13:15.664 ]' 00:13:15.664 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.922 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.922 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.922 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:15.922 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.922 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.922 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.922 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.181 12:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.116 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.682 00:13:17.682 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.682 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.682 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.940 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.940 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.940 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.940 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.940 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.940 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.940 { 00:13:17.940 "auth": { 00:13:17.940 "dhgroup": "ffdhe6144", 00:13:17.940 "digest": "sha384", 00:13:17.940 "state": "completed" 00:13:17.940 }, 00:13:17.940 "cntlid": 85, 00:13:17.940 "listen_address": { 00:13:17.940 "adrfam": "IPv4", 00:13:17.940 "traddr": "10.0.0.2", 00:13:17.940 "trsvcid": "4420", 00:13:17.940 "trtype": "TCP" 00:13:17.940 }, 00:13:17.940 "peer_address": { 00:13:17.940 "adrfam": "IPv4", 00:13:17.940 "traddr": "10.0.0.1", 00:13:17.940 "trsvcid": "41992", 00:13:17.940 "trtype": "TCP" 00:13:17.940 }, 00:13:17.940 "qid": 0, 00:13:17.940 "state": "enabled", 00:13:17.940 "thread": "nvmf_tgt_poll_group_000" 00:13:17.940 } 00:13:17.940 ]' 00:13:17.940 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:18.198 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:18.198 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:18.198 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:18.198 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:18.198 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.198 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.198 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.456 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret 
DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:13:19.021 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.021 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:19.021 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.021 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.021 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.021 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.021 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:19.021 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:19.279 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:19.847 00:13:19.847 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.847 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.847 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.105 { 00:13:20.105 "auth": { 00:13:20.105 "dhgroup": "ffdhe6144", 00:13:20.105 "digest": "sha384", 00:13:20.105 "state": "completed" 00:13:20.105 }, 00:13:20.105 "cntlid": 87, 00:13:20.105 "listen_address": { 00:13:20.105 "adrfam": "IPv4", 00:13:20.105 "traddr": "10.0.0.2", 00:13:20.105 "trsvcid": "4420", 00:13:20.105 "trtype": "TCP" 00:13:20.105 }, 00:13:20.105 "peer_address": { 00:13:20.105 "adrfam": "IPv4", 00:13:20.105 "traddr": "10.0.0.1", 00:13:20.105 "trsvcid": "42022", 00:13:20.105 "trtype": "TCP" 00:13:20.105 }, 00:13:20.105 "qid": 0, 00:13:20.105 "state": "enabled", 00:13:20.105 "thread": "nvmf_tgt_poll_group_000" 00:13:20.105 } 00:13:20.105 ]' 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:20.105 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.363 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.363 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.363 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.621 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:13:21.187 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.187 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:21.187 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.187 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.187 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.187 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:21.187 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:21.187 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:21.187 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.753 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.320 00:13:22.320 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.320 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.320 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.581 12:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.581 { 00:13:22.581 "auth": { 00:13:22.581 "dhgroup": "ffdhe8192", 00:13:22.581 "digest": "sha384", 00:13:22.581 "state": "completed" 00:13:22.581 }, 00:13:22.581 "cntlid": 89, 00:13:22.581 "listen_address": { 00:13:22.581 "adrfam": "IPv4", 00:13:22.581 "traddr": "10.0.0.2", 00:13:22.581 "trsvcid": "4420", 00:13:22.581 "trtype": "TCP" 00:13:22.581 }, 00:13:22.581 "peer_address": { 00:13:22.581 "adrfam": "IPv4", 00:13:22.581 "traddr": "10.0.0.1", 00:13:22.581 "trsvcid": "42038", 00:13:22.581 "trtype": "TCP" 00:13:22.581 }, 00:13:22.581 "qid": 0, 00:13:22.581 "state": "enabled", 00:13:22.581 "thread": "nvmf_tgt_poll_group_000" 00:13:22.581 } 00:13:22.581 ]' 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.581 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.842 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:13:23.777 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.777 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:23.778 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.778 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.778 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.778 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.778 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:23.778 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.036 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.603 00:13:24.603 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.603 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.603 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.861 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.861 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.861 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.861 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.861 12:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.861 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.861 { 00:13:24.861 "auth": { 00:13:24.861 "dhgroup": "ffdhe8192", 00:13:24.861 "digest": "sha384", 00:13:24.861 "state": "completed" 00:13:24.861 }, 00:13:24.861 "cntlid": 91, 00:13:24.861 "listen_address": { 00:13:24.861 "adrfam": "IPv4", 00:13:24.861 "traddr": "10.0.0.2", 00:13:24.861 "trsvcid": "4420", 00:13:24.861 "trtype": "TCP" 00:13:24.861 }, 00:13:24.861 "peer_address": { 00:13:24.861 "adrfam": "IPv4", 00:13:24.861 "traddr": "10.0.0.1", 00:13:24.861 "trsvcid": "36238", 00:13:24.861 "trtype": "TCP" 00:13:24.861 }, 00:13:24.861 "qid": 0, 00:13:24.861 "state": "enabled", 00:13:24.861 "thread": "nvmf_tgt_poll_group_000" 00:13:24.861 } 00:13:24.861 ]' 00:13:24.861 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:25.119 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:25.119 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:25.119 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:25.120 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:25.120 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.120 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.120 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.378 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:13:26.314 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.314 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:26.314 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.314 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.314 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.314 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:26.314 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:26.314 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.314 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.882 00:13:27.140 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.140 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.140 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:27.399 { 00:13:27.399 "auth": { 00:13:27.399 "dhgroup": "ffdhe8192", 00:13:27.399 "digest": "sha384", 00:13:27.399 "state": "completed" 00:13:27.399 }, 00:13:27.399 "cntlid": 93, 00:13:27.399 "listen_address": { 00:13:27.399 "adrfam": 
"IPv4", 00:13:27.399 "traddr": "10.0.0.2", 00:13:27.399 "trsvcid": "4420", 00:13:27.399 "trtype": "TCP" 00:13:27.399 }, 00:13:27.399 "peer_address": { 00:13:27.399 "adrfam": "IPv4", 00:13:27.399 "traddr": "10.0.0.1", 00:13:27.399 "trsvcid": "36260", 00:13:27.399 "trtype": "TCP" 00:13:27.399 }, 00:13:27.399 "qid": 0, 00:13:27.399 "state": "enabled", 00:13:27.399 "thread": "nvmf_tgt_poll_group_000" 00:13:27.399 } 00:13:27.399 ]' 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.399 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.702 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:13:28.307 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.307 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:28.307 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.307 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.307 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.307 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:28.307 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:28.307 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.567 12:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.567 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:29.503 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.503 { 00:13:29.503 "auth": { 00:13:29.503 "dhgroup": "ffdhe8192", 00:13:29.503 "digest": "sha384", 00:13:29.503 "state": "completed" 00:13:29.503 }, 00:13:29.503 "cntlid": 95, 00:13:29.503 "listen_address": { 00:13:29.503 "adrfam": "IPv4", 00:13:29.503 "traddr": "10.0.0.2", 00:13:29.503 "trsvcid": "4420", 00:13:29.503 "trtype": "TCP" 00:13:29.503 }, 00:13:29.503 "peer_address": { 00:13:29.503 "adrfam": "IPv4", 00:13:29.503 "traddr": "10.0.0.1", 00:13:29.503 "trsvcid": "36294", 00:13:29.503 "trtype": "TCP" 00:13:29.503 }, 00:13:29.503 "qid": 0, 00:13:29.503 "state": "enabled", 00:13:29.503 "thread": "nvmf_tgt_poll_group_000" 00:13:29.503 } 00:13:29.503 ]' 00:13:29.503 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.761 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:29.761 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.761 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.761 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.761 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.761 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.761 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.020 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:30.956 12:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.956 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.215 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.215 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.215 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.473 00:13:31.473 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.473 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.473 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.732 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:31.733 { 00:13:31.733 "auth": { 00:13:31.733 "dhgroup": "null", 00:13:31.733 "digest": "sha512", 00:13:31.733 "state": "completed" 00:13:31.733 }, 00:13:31.733 "cntlid": 97, 00:13:31.733 "listen_address": { 00:13:31.733 "adrfam": "IPv4", 00:13:31.733 "traddr": "10.0.0.2", 00:13:31.733 "trsvcid": "4420", 00:13:31.733 "trtype": "TCP" 00:13:31.733 }, 00:13:31.733 "peer_address": { 00:13:31.733 "adrfam": "IPv4", 00:13:31.733 "traddr": "10.0.0.1", 00:13:31.733 "trsvcid": "36324", 00:13:31.733 "trtype": "TCP" 00:13:31.733 }, 00:13:31.733 "qid": 0, 00:13:31.733 "state": "enabled", 00:13:31.733 "thread": "nvmf_tgt_poll_group_000" 00:13:31.733 } 00:13:31.733 ]' 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:31.733 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.992 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.992 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.992 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.250 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:13:32.841 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.841 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:32.841 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.841 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.841 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.841 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:32.841 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:32.841 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.100 12:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.100 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.667 00:13:33.667 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:33.667 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:33.667 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.667 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.667 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.667 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.667 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.993 { 00:13:33.993 "auth": { 00:13:33.993 "dhgroup": "null", 00:13:33.993 "digest": "sha512", 00:13:33.993 "state": "completed" 00:13:33.993 }, 00:13:33.993 "cntlid": 99, 00:13:33.993 "listen_address": { 00:13:33.993 "adrfam": "IPv4", 00:13:33.993 "traddr": "10.0.0.2", 00:13:33.993 "trsvcid": "4420", 00:13:33.993 "trtype": "TCP" 00:13:33.993 }, 00:13:33.993 "peer_address": { 00:13:33.993 "adrfam": "IPv4", 00:13:33.993 "traddr": "10.0.0.1", 00:13:33.993 "trsvcid": "39840", 00:13:33.993 "trtype": "TCP" 00:13:33.993 }, 00:13:33.993 "qid": 0, 00:13:33.993 "state": "enabled", 00:13:33.993 "thread": "nvmf_tgt_poll_group_000" 00:13:33.993 } 00:13:33.993 ]' 00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
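The three jq assertions above are the heart of each pass: once the SPDK host attaches with a given key, nvmf_subsystem_get_qpairs on the target must report the negotiated digest and DH group and an auth state of "completed". A minimal standalone sketch of that check, assuming the target app is listening on its default RPC socket (the test's rpc_cmd wrapper hides the socket path):

    subnqn=nqn.2024-03.io.spdk:cnode0
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn")
    # Same filters the test applies at target/auth.sh@46-48.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]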
00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.993 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.251 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:13:34.818 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.818 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:34.818 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.818 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.818 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.818 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:34.818 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:34.818 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:35.076 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:13:35.077 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.077 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:35.077 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:35.077 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:35.077 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.077 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.077 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.077 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.335 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.335 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.335 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.594 00:13:35.594 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:35.594 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.594 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:35.853 { 00:13:35.853 "auth": { 00:13:35.853 "dhgroup": "null", 00:13:35.853 "digest": "sha512", 00:13:35.853 "state": "completed" 00:13:35.853 }, 00:13:35.853 "cntlid": 101, 00:13:35.853 "listen_address": { 00:13:35.853 "adrfam": "IPv4", 00:13:35.853 "traddr": "10.0.0.2", 00:13:35.853 "trsvcid": "4420", 00:13:35.853 "trtype": "TCP" 00:13:35.853 }, 00:13:35.853 "peer_address": { 00:13:35.853 "adrfam": "IPv4", 00:13:35.853 "traddr": "10.0.0.1", 00:13:35.853 "trsvcid": "39870", 00:13:35.853 "trtype": "TCP" 00:13:35.853 }, 00:13:35.853 "qid": 0, 00:13:35.853 "state": "enabled", 00:13:35.853 "thread": "nvmf_tgt_poll_group_000" 00:13:35.853 } 00:13:35.853 ]' 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:35.853 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.110 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.110 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.110 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.367 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:13:36.955 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.955 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:36.955 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.955 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.955 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.955 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.955 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:36.955 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:37.213 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:13:37.213 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.213 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:37.213 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:37.213 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:37.214 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.214 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:13:37.214 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.214 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.214 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.214 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:37.214 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:37.780 
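Note that key3 is attached without --dhchap-ctrlr-key: ckeys carries no controller key at index 3, so the ${ckeys[$3]:+...} expansion at target/auth.sh@37 produces zero words and only unidirectional (host-to-controller) authentication is requested for that pass. A sketch of the idiom as the script appears to use it, with $keyid standing in for the positional $3:

    # ckey expands to nothing when ckeys[keyid] is unset or empty, so the
    # attach below carries no bidirectional (controller) challenge key.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"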
00:13:37.780 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.780 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.780 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.039 { 00:13:38.039 "auth": { 00:13:38.039 "dhgroup": "null", 00:13:38.039 "digest": "sha512", 00:13:38.039 "state": "completed" 00:13:38.039 }, 00:13:38.039 "cntlid": 103, 00:13:38.039 "listen_address": { 00:13:38.039 "adrfam": "IPv4", 00:13:38.039 "traddr": "10.0.0.2", 00:13:38.039 "trsvcid": "4420", 00:13:38.039 "trtype": "TCP" 00:13:38.039 }, 00:13:38.039 "peer_address": { 00:13:38.039 "adrfam": "IPv4", 00:13:38.039 "traddr": "10.0.0.1", 00:13:38.039 "trsvcid": "39888", 00:13:38.039 "trtype": "TCP" 00:13:38.039 }, 00:13:38.039 "qid": 0, 00:13:38.039 "state": "enabled", 00:13:38.039 "thread": "nvmf_tgt_poll_group_000" 00:13:38.039 } 00:13:38.039 ]' 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.039 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.298 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:13:39.232 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.232 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:39.232 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.232 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.232 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.232 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:39.232 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:39.232 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:39.232 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.492 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.751 00:13:39.751 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.751 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.751 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:40.010 { 00:13:40.010 "auth": { 00:13:40.010 "dhgroup": "ffdhe2048", 00:13:40.010 "digest": "sha512", 00:13:40.010 "state": "completed" 00:13:40.010 }, 00:13:40.010 "cntlid": 105, 00:13:40.010 "listen_address": { 00:13:40.010 "adrfam": "IPv4", 00:13:40.010 "traddr": "10.0.0.2", 00:13:40.010 "trsvcid": "4420", 00:13:40.010 "trtype": "TCP" 00:13:40.010 }, 00:13:40.010 "peer_address": { 00:13:40.010 "adrfam": "IPv4", 00:13:40.010 "traddr": "10.0.0.1", 00:13:40.010 "trsvcid": "39906", 00:13:40.010 "trtype": "TCP" 00:13:40.010 }, 00:13:40.010 "qid": 0, 00:13:40.010 "state": "enabled", 00:13:40.010 "thread": "nvmf_tgt_poll_group_000" 00:13:40.010 } 00:13:40.010 ]' 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:40.010 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:40.268 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.268 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.268 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.527 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:13:41.107 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.107 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:41.107 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.107 12:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.107 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.107 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:41.107 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:41.107 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.675 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.934 00:13:41.934 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.934 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.934 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:42.193 { 00:13:42.193 "auth": { 00:13:42.193 "dhgroup": "ffdhe2048", 00:13:42.193 "digest": "sha512", 00:13:42.193 "state": "completed" 00:13:42.193 }, 00:13:42.193 "cntlid": 107, 00:13:42.193 "listen_address": { 00:13:42.193 "adrfam": "IPv4", 00:13:42.193 "traddr": "10.0.0.2", 00:13:42.193 "trsvcid": "4420", 00:13:42.193 "trtype": "TCP" 00:13:42.193 }, 00:13:42.193 "peer_address": { 00:13:42.193 "adrfam": "IPv4", 00:13:42.193 "traddr": "10.0.0.1", 00:13:42.193 "trsvcid": "39938", 00:13:42.193 "trtype": "TCP" 00:13:42.193 }, 00:13:42.193 "qid": 0, 00:13:42.193 "state": "enabled", 00:13:42.193 "thread": "nvmf_tgt_poll_group_000" 00:13:42.193 } 00:13:42.193 ]' 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:42.193 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:42.193 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.193 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.193 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.761 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:13:43.329 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.329 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:43.329 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.329 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.329 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.329 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:43.329 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:43.329 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.588 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.847 00:13:44.105 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:44.105 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:44.105 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:44.364 { 00:13:44.364 "auth": { 00:13:44.364 "dhgroup": "ffdhe2048", 
00:13:44.364 "digest": "sha512", 00:13:44.364 "state": "completed" 00:13:44.364 }, 00:13:44.364 "cntlid": 109, 00:13:44.364 "listen_address": { 00:13:44.364 "adrfam": "IPv4", 00:13:44.364 "traddr": "10.0.0.2", 00:13:44.364 "trsvcid": "4420", 00:13:44.364 "trtype": "TCP" 00:13:44.364 }, 00:13:44.364 "peer_address": { 00:13:44.364 "adrfam": "IPv4", 00:13:44.364 "traddr": "10.0.0.1", 00:13:44.364 "trsvcid": "45362", 00:13:44.364 "trtype": "TCP" 00:13:44.364 }, 00:13:44.364 "qid": 0, 00:13:44.364 "state": "enabled", 00:13:44.364 "thread": "nvmf_tgt_poll_group_000" 00:13:44.364 } 00:13:44.364 ]' 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.364 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.622 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 
00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:45.557 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:46.125 00:13:46.125 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.125 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.125 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.383 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.383 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.383 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.383 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.383 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.383 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.383 { 00:13:46.383 "auth": { 00:13:46.383 "dhgroup": "ffdhe2048", 00:13:46.383 "digest": "sha512", 00:13:46.383 "state": "completed" 00:13:46.383 }, 00:13:46.383 "cntlid": 111, 00:13:46.383 "listen_address": { 00:13:46.383 "adrfam": "IPv4", 00:13:46.383 "traddr": "10.0.0.2", 00:13:46.383 "trsvcid": "4420", 00:13:46.383 "trtype": "TCP" 00:13:46.383 }, 00:13:46.383 "peer_address": { 00:13:46.383 "adrfam": "IPv4", 00:13:46.383 "traddr": "10.0.0.1", 00:13:46.383 "trsvcid": "45392", 00:13:46.383 "trtype": "TCP" 00:13:46.383 }, 00:13:46.383 "qid": 0, 00:13:46.383 "state": 
"enabled", 00:13:46.384 "thread": "nvmf_tgt_poll_group_000" 00:13:46.384 } 00:13:46.384 ]' 00:13:46.384 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.384 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.384 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.384 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:46.384 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.384 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.384 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.384 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.642 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:13:47.576 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.576 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:47.576 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.576 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.576 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.576 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.576 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.577 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:47.577 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 
00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.835 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.093 00:13:48.093 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.093 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.093 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.352 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.352 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.352 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.352 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.352 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.352 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.352 { 00:13:48.352 "auth": { 00:13:48.352 "dhgroup": "ffdhe3072", 00:13:48.352 "digest": "sha512", 00:13:48.352 "state": "completed" 00:13:48.352 }, 00:13:48.352 "cntlid": 113, 00:13:48.352 "listen_address": { 00:13:48.352 "adrfam": "IPv4", 00:13:48.352 "traddr": "10.0.0.2", 00:13:48.352 "trsvcid": "4420", 00:13:48.352 "trtype": "TCP" 00:13:48.352 }, 00:13:48.352 "peer_address": { 00:13:48.352 "adrfam": "IPv4", 00:13:48.352 "traddr": "10.0.0.1", 00:13:48.352 "trsvcid": "45424", 00:13:48.352 "trtype": "TCP" 00:13:48.352 }, 00:13:48.352 "qid": 0, 00:13:48.352 "state": "enabled", 00:13:48.352 "thread": "nvmf_tgt_poll_group_000" 00:13:48.352 } 00:13:48.352 ]' 00:13:48.352 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:48.352 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:48.352 12:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.610 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:48.610 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.610 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.610 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.610 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.869 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:13:49.463 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.463 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:49.463 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.463 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.463 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.464 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.464 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:49.464 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.028 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.287 00:13:50.287 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.287 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.287 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.544 { 00:13:50.544 "auth": { 00:13:50.544 "dhgroup": "ffdhe3072", 00:13:50.544 "digest": "sha512", 00:13:50.544 "state": "completed" 00:13:50.544 }, 00:13:50.544 "cntlid": 115, 00:13:50.544 "listen_address": { 00:13:50.544 "adrfam": "IPv4", 00:13:50.544 "traddr": "10.0.0.2", 00:13:50.544 "trsvcid": "4420", 00:13:50.544 "trtype": "TCP" 00:13:50.544 }, 00:13:50.544 "peer_address": { 00:13:50.544 "adrfam": "IPv4", 00:13:50.544 "traddr": "10.0.0.1", 00:13:50.544 "trsvcid": "45450", 00:13:50.544 "trtype": "TCP" 00:13:50.544 }, 00:13:50.544 "qid": 0, 00:13:50.544 "state": "enabled", 00:13:50.544 "thread": "nvmf_tgt_poll_group_000" 00:13:50.544 } 00:13:50.544 ]' 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:50.544 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.802 12:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.802 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.802 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.060 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:13:51.625 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.625 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:51.625 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.625 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.625 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.625 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.625 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:51.625 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.883 12:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.883 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.449 00:13:52.449 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.449 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.449 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.707 { 00:13:52.707 "auth": { 00:13:52.707 "dhgroup": "ffdhe3072", 00:13:52.707 "digest": "sha512", 00:13:52.707 "state": "completed" 00:13:52.707 }, 00:13:52.707 "cntlid": 117, 00:13:52.707 "listen_address": { 00:13:52.707 "adrfam": "IPv4", 00:13:52.707 "traddr": "10.0.0.2", 00:13:52.707 "trsvcid": "4420", 00:13:52.707 "trtype": "TCP" 00:13:52.707 }, 00:13:52.707 "peer_address": { 00:13:52.707 "adrfam": "IPv4", 00:13:52.707 "traddr": "10.0.0.1", 00:13:52.707 "trsvcid": "45460", 00:13:52.707 "trtype": "TCP" 00:13:52.707 }, 00:13:52.707 "qid": 0, 00:13:52.707 "state": "enabled", 00:13:52.707 "thread": "nvmf_tgt_poll_group_000" 00:13:52.707 } 00:13:52.707 ]' 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:52.707 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.708 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.708 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.708 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
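A note on the --dhchap-secret / --dhchap-ctrl-secret values in the nvme connect lines: they follow the NVMe specification's DHHC-1 secret representation, DHHC-1:<t>:<base64 key material>:, where the two-digit <t> field records the transform applied to the raw secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); the longer :03: blobs in this log simply carry more key material. nvme-cli can generate keys in this format; a hedged example follows (flag spellings per nvme-cli's gen-dhchap-key documentation, and the output is random, so the printed value is only a placeholder):

    # Mint a 48-byte DH-HMAC-CHAP secret, SHA-384-transformed, bound to the
    # host NQN used throughout this run (illustrative invocation):
    nvme gen-dhchap-key --key-length=48 --hmac=2 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2
    # prints something like: DHHC-1:02:<base64 blob>: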
00:13:53.274 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:13:53.841 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.841 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:53.841 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.841 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.841 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.841 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:53.841 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:53.841 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:54.100 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:54.359 00:13:54.616 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.616 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.616 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:54.875 { 00:13:54.875 "auth": { 00:13:54.875 "dhgroup": "ffdhe3072", 00:13:54.875 "digest": "sha512", 00:13:54.875 "state": "completed" 00:13:54.875 }, 00:13:54.875 "cntlid": 119, 00:13:54.875 "listen_address": { 00:13:54.875 "adrfam": "IPv4", 00:13:54.875 "traddr": "10.0.0.2", 00:13:54.875 "trsvcid": "4420", 00:13:54.875 "trtype": "TCP" 00:13:54.875 }, 00:13:54.875 "peer_address": { 00:13:54.875 "adrfam": "IPv4", 00:13:54.875 "traddr": "10.0.0.1", 00:13:54.875 "trsvcid": "45888", 00:13:54.875 "trtype": "TCP" 00:13:54.875 }, 00:13:54.875 "qid": 0, 00:13:54.875 "state": "enabled", 00:13:54.875 "thread": "nvmf_tgt_poll_group_000" 00:13:54.875 } 00:13:54.875 ]' 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.875 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.132 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:13:56.066 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:56.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.066 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:56.066 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.066 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.066 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.066 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:56.066 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.066 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:56.066 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.366 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.635 00:13:56.635 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.635 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.635 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.892 { 00:13:56.892 "auth": { 00:13:56.892 "dhgroup": "ffdhe4096", 00:13:56.892 "digest": "sha512", 00:13:56.892 "state": "completed" 00:13:56.892 }, 00:13:56.892 "cntlid": 121, 00:13:56.892 "listen_address": { 00:13:56.892 "adrfam": "IPv4", 00:13:56.892 "traddr": "10.0.0.2", 00:13:56.892 "trsvcid": "4420", 00:13:56.892 "trtype": "TCP" 00:13:56.892 }, 00:13:56.892 "peer_address": { 00:13:56.892 "adrfam": "IPv4", 00:13:56.892 "traddr": "10.0.0.1", 00:13:56.892 "trsvcid": "45906", 00:13:56.892 "trtype": "TCP" 00:13:56.892 }, 00:13:56.892 "qid": 0, 00:13:56.892 "state": "enabled", 00:13:56.892 "thread": "nvmf_tgt_poll_group_000" 00:13:56.892 } 00:13:56.892 ]' 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:56.892 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.149 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.149 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.149 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.406 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:13:57.971 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.971 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:13:57.971 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.971 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.971 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.971 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.971 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:57.971 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.230 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.488 00:13:58.746 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.747 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.747 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.747 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.747 12:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.747 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.747 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.005 { 00:13:59.005 "auth": { 00:13:59.005 "dhgroup": "ffdhe4096", 00:13:59.005 "digest": "sha512", 00:13:59.005 "state": "completed" 00:13:59.005 }, 00:13:59.005 "cntlid": 123, 00:13:59.005 "listen_address": { 00:13:59.005 "adrfam": "IPv4", 00:13:59.005 "traddr": "10.0.0.2", 00:13:59.005 "trsvcid": "4420", 00:13:59.005 "trtype": "TCP" 00:13:59.005 }, 00:13:59.005 "peer_address": { 00:13:59.005 "adrfam": "IPv4", 00:13:59.005 "traddr": "10.0.0.1", 00:13:59.005 "trsvcid": "45916", 00:13:59.005 "trtype": "TCP" 00:13:59.005 }, 00:13:59.005 "qid": 0, 00:13:59.005 "state": "enabled", 00:13:59.005 "thread": "nvmf_tgt_poll_group_000" 00:13:59.005 } 00:13:59.005 ]' 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.005 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.263 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:14:00.211 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.211 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:00.211 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.211 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.211 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
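Stepping back, the repetition in this stretch is driven by the two loops visible in the auth.sh@92 and auth.sh@93 xtrace lines: an outer walk over DH groups and an inner walk over key indices, with auth.sh@94 re-pinning the host options and auth.sh@96 calling connect_authenticate on each iteration. A sketch of that structure (the array contents are inferred from the combinations exercised in this excerpt, where the digest stays sha512 throughout; hostrpc and connect_authenticate are the script's own helpers, echoed above):

    # Driving loops behind the repeated passes (sketch from auth.sh@92-@96).
    for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do      # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"                    # auth.sh@94
            connect_authenticate sha512 "$dhgroup" "$keyid"     # auth.sh@96
        done
    done

A pass only reaches the next iteration if every [[ ... ]] assertion after its nvmf_subsystem_get_qpairs dump held, so each completed cycle in the log is a passed combination.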
00:14:00.211 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.211 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:00.211 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.492 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.751 00:14:00.751 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.751 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.751 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.009 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.009 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.009 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.009 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.009 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.009 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.009 { 00:14:01.009 "auth": { 00:14:01.009 "dhgroup": "ffdhe4096", 00:14:01.009 "digest": "sha512", 00:14:01.009 "state": "completed" 00:14:01.009 }, 00:14:01.009 "cntlid": 125, 00:14:01.009 "listen_address": { 00:14:01.009 "adrfam": "IPv4", 00:14:01.009 "traddr": "10.0.0.2", 00:14:01.009 "trsvcid": "4420", 00:14:01.009 "trtype": "TCP" 00:14:01.009 }, 00:14:01.009 "peer_address": { 00:14:01.009 "adrfam": "IPv4", 00:14:01.009 "traddr": "10.0.0.1", 00:14:01.009 "trsvcid": "45942", 00:14:01.009 "trtype": "TCP" 00:14:01.009 }, 00:14:01.009 "qid": 0, 00:14:01.009 "state": "enabled", 00:14:01.009 "thread": "nvmf_tgt_poll_group_000" 00:14:01.009 } 00:14:01.009 ]' 00:14:01.009 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.268 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.268 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.268 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:01.268 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.268 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.268 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.268 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.527 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:14:02.095 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.095 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:02.095 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.095 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.353 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.353 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.353 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:02.353 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:02.611 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:14:02.611 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.611 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:02.611 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:02.611 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:02.611 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.611 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:14:02.612 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.612 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.612 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.612 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.612 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.869 00:14:02.869 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.869 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.869 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.127 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.127 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.127 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.127 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.127 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.127 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.127 { 00:14:03.127 "auth": { 00:14:03.127 "dhgroup": "ffdhe4096", 00:14:03.127 "digest": "sha512", 00:14:03.127 "state": "completed" 00:14:03.127 }, 00:14:03.127 "cntlid": 127, 00:14:03.127 "listen_address": { 00:14:03.127 "adrfam": "IPv4", 00:14:03.127 "traddr": "10.0.0.2", 00:14:03.127 "trsvcid": "4420", 00:14:03.127 "trtype": "TCP" 00:14:03.127 }, 
00:14:03.127 "peer_address": { 00:14:03.127 "adrfam": "IPv4", 00:14:03.127 "traddr": "10.0.0.1", 00:14:03.127 "trsvcid": "45970", 00:14:03.127 "trtype": "TCP" 00:14:03.127 }, 00:14:03.127 "qid": 0, 00:14:03.127 "state": "enabled", 00:14:03.127 "thread": "nvmf_tgt_poll_group_000" 00:14:03.127 } 00:14:03.127 ]' 00:14:03.127 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.127 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.127 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.385 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:03.385 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.385 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.385 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.385 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.644 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:14:04.210 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.210 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:04.210 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.210 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.210 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.210 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.211 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.211 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:04.211 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.470 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.039 00:14:05.039 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.039 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.039 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.297 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.297 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.297 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.297 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.297 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.297 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.297 { 00:14:05.297 "auth": { 00:14:05.297 "dhgroup": "ffdhe6144", 00:14:05.297 "digest": "sha512", 00:14:05.297 "state": "completed" 00:14:05.297 }, 00:14:05.297 "cntlid": 129, 00:14:05.297 "listen_address": { 00:14:05.297 "adrfam": "IPv4", 00:14:05.297 "traddr": "10.0.0.2", 00:14:05.297 "trsvcid": "4420", 00:14:05.297 "trtype": "TCP" 00:14:05.297 }, 00:14:05.297 "peer_address": { 00:14:05.297 "adrfam": "IPv4", 00:14:05.297 "traddr": "10.0.0.1", 00:14:05.297 "trsvcid": "48624", 00:14:05.297 "trtype": "TCP" 00:14:05.297 }, 00:14:05.297 "qid": 0, 00:14:05.297 "state": "enabled", 00:14:05.297 "thread": "nvmf_tgt_poll_group_000" 00:14:05.297 } 00:14:05.297 ]' 00:14:05.297 12:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.298 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.298 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.298 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:05.298 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.556 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.556 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.556 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.815 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:14:06.380 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.380 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:06.380 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.380 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.380 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.380 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.380 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:06.380 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.638 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.204 00:14:07.204 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.204 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.204 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.462 { 00:14:07.462 "auth": { 00:14:07.462 "dhgroup": "ffdhe6144", 00:14:07.462 "digest": "sha512", 00:14:07.462 "state": "completed" 00:14:07.462 }, 00:14:07.462 "cntlid": 131, 00:14:07.462 "listen_address": { 00:14:07.462 "adrfam": "IPv4", 00:14:07.462 "traddr": "10.0.0.2", 00:14:07.462 "trsvcid": "4420", 00:14:07.462 "trtype": "TCP" 00:14:07.462 }, 00:14:07.462 "peer_address": { 00:14:07.462 "adrfam": "IPv4", 00:14:07.462 "traddr": "10.0.0.1", 00:14:07.462 "trsvcid": "48638", 00:14:07.462 "trtype": "TCP" 00:14:07.462 }, 00:14:07.462 "qid": 0, 00:14:07.462 "state": "enabled", 00:14:07.462 "thread": "nvmf_tgt_poll_group_000" 00:14:07.462 } 00:14:07.462 ]' 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.462 12:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.462 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.720 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
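--- annotation: for orientation amid the xtrace output, every pass of this test repeats the same RPC sequence, once per digest/dhgroup/key combination. Below is a condensed sketch of the pass unfolding here (sha512 / ffdhe6144 / key2); the commands, flags, and NQNs are taken verbatim from the trace, while the shell variable names and the socket behind the rpc_cmd wrapper are illustrative assumptions, not part of the log. ---

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # host-side RPC client seen in the trace
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2

# Host side: restrict the initiator to the one digest/dhgroup pair under test.
"$rpc_py" -s "$host_sock" bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Target side: authorize the host NQN with this pass's key. The controller key
# (ckeyN) is appended only when one exists for the slot, which is what the
# ${ckeys[...]:+--dhchap-ctrlr-key ...} expansion in the trace accomplishes.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attaching a controller forces a DH-HMAC-CHAP handshake over TCP.
"$rpc_py" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

--- If the handshake fails, the attach RPC errors out instead of registering bdev controller nvme0, which the bdev_nvme_get_controllers check that follows in the trace would catch. ---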
00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.653 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.217 00:14:09.217 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.217 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.217 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.474 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.474 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.474 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.474 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.474 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.474 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.474 { 00:14:09.474 "auth": { 00:14:09.474 "dhgroup": "ffdhe6144", 00:14:09.474 "digest": "sha512", 00:14:09.474 "state": "completed" 00:14:09.474 }, 00:14:09.474 "cntlid": 133, 00:14:09.474 "listen_address": { 00:14:09.474 "adrfam": "IPv4", 00:14:09.474 "traddr": "10.0.0.2", 00:14:09.474 "trsvcid": "4420", 00:14:09.474 "trtype": "TCP" 00:14:09.474 }, 00:14:09.474 "peer_address": { 00:14:09.474 "adrfam": "IPv4", 00:14:09.474 "traddr": "10.0.0.1", 00:14:09.474 "trsvcid": "48652", 00:14:09.474 "trtype": "TCP" 00:14:09.474 }, 00:14:09.474 "qid": 0, 00:14:09.474 "state": "enabled", 00:14:09.474 "thread": "nvmf_tgt_poll_group_000" 00:14:09.474 } 00:14:09.474 ]' 00:14:09.474 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.474 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:09.474 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.731 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:09.731 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.731 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.731 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.731 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.989 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:14:10.553 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.553 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:10.553 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.553 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.811 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.811 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.811 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:10.811 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:11.069 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:11.328 00:14:11.328 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.328 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.328 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.586 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.586 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.586 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.586 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.845 { 00:14:11.845 "auth": { 00:14:11.845 "dhgroup": "ffdhe6144", 00:14:11.845 "digest": "sha512", 00:14:11.845 "state": "completed" 00:14:11.845 }, 00:14:11.845 "cntlid": 135, 00:14:11.845 "listen_address": { 00:14:11.845 "adrfam": "IPv4", 00:14:11.845 "traddr": "10.0.0.2", 00:14:11.845 "trsvcid": "4420", 00:14:11.845 "trtype": "TCP" 00:14:11.845 }, 00:14:11.845 "peer_address": { 00:14:11.845 "adrfam": "IPv4", 00:14:11.845 "traddr": "10.0.0.1", 00:14:11.845 "trsvcid": "48678", 00:14:11.845 "trtype": "TCP" 00:14:11.845 }, 00:14:11.845 "qid": 0, 00:14:11.845 "state": "enabled", 00:14:11.845 "thread": "nvmf_tgt_poll_group_000" 00:14:11.845 } 00:14:11.845 ]' 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.845 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.103 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:14:13.040 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.040 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:13.040 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.040 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.040 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.040 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:13.040 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.040 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:13.040 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.299 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.867 00:14:13.867 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.867 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.867 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.125 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.125 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.125 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.125 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.125 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.125 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.125 { 00:14:14.125 "auth": { 00:14:14.125 "dhgroup": "ffdhe8192", 00:14:14.125 "digest": "sha512", 00:14:14.125 "state": "completed" 00:14:14.125 }, 00:14:14.125 "cntlid": 137, 00:14:14.125 "listen_address": { 00:14:14.125 "adrfam": "IPv4", 00:14:14.125 "traddr": "10.0.0.2", 00:14:14.125 "trsvcid": "4420", 00:14:14.125 "trtype": "TCP" 00:14:14.125 }, 00:14:14.125 "peer_address": { 00:14:14.125 "adrfam": "IPv4", 00:14:14.125 "traddr": "10.0.0.1", 00:14:14.125 "trsvcid": "60416", 00:14:14.125 "trtype": "TCP" 00:14:14.125 }, 00:14:14.125 "qid": 0, 00:14:14.125 "state": "enabled", 00:14:14.125 "thread": "nvmf_tgt_poll_group_000" 00:14:14.125 } 00:14:14.125 ]' 00:14:14.125 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.125 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:14.125 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.384 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:14.384 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.384 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.384 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.384 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.642 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:14:15.209 12:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.467 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:15.467 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.467 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.467 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.467 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.467 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:15.467 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.725 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.291 00:14:16.291 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.291 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.291 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.549 { 00:14:16.549 "auth": { 00:14:16.549 "dhgroup": "ffdhe8192", 00:14:16.549 "digest": "sha512", 00:14:16.549 "state": "completed" 00:14:16.549 }, 00:14:16.549 "cntlid": 139, 00:14:16.549 "listen_address": { 00:14:16.549 "adrfam": "IPv4", 00:14:16.549 "traddr": "10.0.0.2", 00:14:16.549 "trsvcid": "4420", 00:14:16.549 "trtype": "TCP" 00:14:16.549 }, 00:14:16.549 "peer_address": { 00:14:16.549 "adrfam": "IPv4", 00:14:16.549 "traddr": "10.0.0.1", 00:14:16.549 "trsvcid": "60444", 00:14:16.549 "trtype": "TCP" 00:14:16.549 }, 00:14:16.549 "qid": 0, 00:14:16.549 "state": "enabled", 00:14:16.549 "thread": "nvmf_tgt_poll_group_000" 00:14:16.549 } 00:14:16.549 ]' 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:16.549 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.807 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.807 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.807 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.065 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:01:NzRhNThkY2E4MDc1NGUwZmQ4MjIxMGQyMTU4YzI3ZjOLptFw: --dhchap-ctrl-secret DHHC-1:02:YzEzZTM4YzM3OTkzY2RlMzA5YjBlNDYxYTMwODY5NDg4NDc5ZGY3ZmRkMTE4NmJk3yZdyw==: 00:14:17.656 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.656 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 
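--- annotation: the success criterion visible in each pass reduces to three jq probes against the target's view of the newly created queue pair: the negotiated digest, the DH group, and an auth state of "completed". A minimal standalone sketch of that check for the ffdhe8192 pass just completed; it assumes, as the trace suggests, that rpc_cmd talks to the target-side RPC socket, and the herestring form of the jq calls is an illustrative guess at the script's exact plumbing. ---

# Ask the target which queue pairs the subsystem currently has.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# Each [[ ]] test aborts the pass on a mismatch; xtrace prints both sides,
# which is where the escaped \s\h\a\5\1\2-style comparisons above come from.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

--- The nvme connect / nvme disconnect pair that follows in the trace exercises the same handshake from the kernel initiator, passing the key material inline as DHHC-1 secrets rather than through the SPDK keyring. ---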
00:14:17.656 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.656 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.656 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.656 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.656 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:17.656 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.914 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.481 00:14:18.481 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.481 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.481 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.740 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.740 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.740 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.740 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.740 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.740 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.740 { 00:14:18.740 "auth": { 00:14:18.740 "dhgroup": "ffdhe8192", 00:14:18.740 "digest": "sha512", 00:14:18.740 "state": "completed" 00:14:18.740 }, 00:14:18.740 "cntlid": 141, 00:14:18.740 "listen_address": { 00:14:18.740 "adrfam": "IPv4", 00:14:18.740 "traddr": "10.0.0.2", 00:14:18.740 "trsvcid": "4420", 00:14:18.740 "trtype": "TCP" 00:14:18.740 }, 00:14:18.740 "peer_address": { 00:14:18.740 "adrfam": "IPv4", 00:14:18.740 "traddr": "10.0.0.1", 00:14:18.740 "trsvcid": "60476", 00:14:18.740 "trtype": "TCP" 00:14:18.740 }, 00:14:18.740 "qid": 0, 00:14:18.740 "state": "enabled", 00:14:18.740 "thread": "nvmf_tgt_poll_group_000" 00:14:18.740 } 00:14:18.740 ]' 00:14:18.740 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.740 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.740 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.999 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:18.999 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.999 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.999 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.999 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.257 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:02:OWI2ZDA4Njc3MjcxMzFkYjY2ZTViMzA1MGU1NTE0MzNiMTgwNzc5ZGY1OGJmYjk1/TWFVQ==: --dhchap-ctrl-secret DHHC-1:01:NjQwY2I1ZjY0ODYwYmU5NzY3ZmNjZDBiMjI1ZTQwZjc1ZuuP: 00:14:19.824 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.824 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:19.824 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.824 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.824 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.824 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.824 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:19.824 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:20.084 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.022 00:14:21.022 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.022 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.022 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.022 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.022 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.022 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.022 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.022 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.022 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:14:21.022 { 00:14:21.022 "auth": { 00:14:21.022 "dhgroup": "ffdhe8192", 00:14:21.022 "digest": "sha512", 00:14:21.022 "state": "completed" 00:14:21.022 }, 00:14:21.022 "cntlid": 143, 00:14:21.022 "listen_address": { 00:14:21.022 "adrfam": "IPv4", 00:14:21.022 "traddr": "10.0.0.2", 00:14:21.022 "trsvcid": "4420", 00:14:21.022 "trtype": "TCP" 00:14:21.022 }, 00:14:21.022 "peer_address": { 00:14:21.022 "adrfam": "IPv4", 00:14:21.023 "traddr": "10.0.0.1", 00:14:21.023 "trsvcid": "60510", 00:14:21.023 "trtype": "TCP" 00:14:21.023 }, 00:14:21.023 "qid": 0, 00:14:21.023 "state": "enabled", 00:14:21.023 "thread": "nvmf_tgt_poll_group_000" 00:14:21.023 } 00:14:21.023 ]' 00:14:21.023 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.281 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.281 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.281 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:21.281 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.281 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.281 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.281 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.540 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.106 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.365 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.931 00:14:23.189 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.189 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.189 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.448 { 
00:14:23.448 "auth": { 00:14:23.448 "dhgroup": "ffdhe8192", 00:14:23.448 "digest": "sha512", 00:14:23.448 "state": "completed" 00:14:23.448 }, 00:14:23.448 "cntlid": 145, 00:14:23.448 "listen_address": { 00:14:23.448 "adrfam": "IPv4", 00:14:23.448 "traddr": "10.0.0.2", 00:14:23.448 "trsvcid": "4420", 00:14:23.448 "trtype": "TCP" 00:14:23.448 }, 00:14:23.448 "peer_address": { 00:14:23.448 "adrfam": "IPv4", 00:14:23.448 "traddr": "10.0.0.1", 00:14:23.448 "trsvcid": "60540", 00:14:23.448 "trtype": "TCP" 00:14:23.448 }, 00:14:23.448 "qid": 0, 00:14:23.448 "state": "enabled", 00:14:23.448 "thread": "nvmf_tgt_poll_group_000" 00:14:23.448 } 00:14:23.448 ]' 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.448 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.706 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:00:NWQ1NTVlMTBjMTYxMTRhNmI4Yzk1YzMyMDFlY2JkYzRlNGI4ZDJmODk3MjBhYTViefBrmQ==: --dhchap-ctrl-secret DHHC-1:03:MTQ4ODI3MGRiNmIwZWNkNzg1YWRmYTBlZjE4MjA5YTY2Y2VjOWY2OGVmODM3YjgwNjdhOTFjNDhiZWQ0MmQyZgGsTEU=: 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:24.640 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:25.207 2024/07/25 12:21:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:25.207 request: 00:14:25.207 { 00:14:25.207 "method": "bdev_nvme_attach_controller", 00:14:25.207 "params": { 00:14:25.207 "name": "nvme0", 00:14:25.207 "trtype": "tcp", 00:14:25.207 "traddr": "10.0.0.2", 00:14:25.207 "adrfam": "ipv4", 00:14:25.207 "trsvcid": "4420", 00:14:25.207 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:25.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2", 00:14:25.207 "prchk_reftag": false, 00:14:25.207 "prchk_guard": false, 00:14:25.207 "hdgst": false, 00:14:25.207 "ddgst": false, 00:14:25.207 "dhchap_key": "key2" 00:14:25.207 } 00:14:25.207 } 00:14:25.207 Got JSON-RPC error response 00:14:25.207 GoRPCClient: error on JSON-RPC call 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:25.207 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:25.773 2024/07/25 12:21:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 name:nvme0 prchk_guard:%!s(bool=false) 
prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:25.773 request: 00:14:25.773 { 00:14:25.773 "method": "bdev_nvme_attach_controller", 00:14:25.774 "params": { 00:14:25.774 "name": "nvme0", 00:14:25.774 "trtype": "tcp", 00:14:25.774 "traddr": "10.0.0.2", 00:14:25.774 "adrfam": "ipv4", 00:14:25.774 "trsvcid": "4420", 00:14:25.774 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:25.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2", 00:14:25.774 "prchk_reftag": false, 00:14:25.774 "prchk_guard": false, 00:14:25.774 "hdgst": false, 00:14:25.774 "ddgst": false, 00:14:25.774 "dhchap_key": "key1", 00:14:25.774 "dhchap_ctrlr_key": "ckey2" 00:14:25.774 } 00:14:25.774 } 00:14:25.774 Got JSON-RPC error response 00:14:25.774 GoRPCClient: error on JSON-RPC call 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key1 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.774 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.363 2024/07/25 12:21:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:26.363 request: 00:14:26.363 { 00:14:26.363 "method": "bdev_nvme_attach_controller", 00:14:26.363 "params": { 00:14:26.363 "name": "nvme0", 00:14:26.363 "trtype": "tcp", 00:14:26.363 "traddr": "10.0.0.2", 00:14:26.363 "adrfam": "ipv4", 00:14:26.363 "trsvcid": "4420", 00:14:26.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:26.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2", 00:14:26.363 "prchk_reftag": false, 00:14:26.363 "prchk_guard": false, 00:14:26.363 "hdgst": false, 00:14:26.363 "ddgst": false, 00:14:26.363 "dhchap_key": "key1", 00:14:26.363 "dhchap_ctrlr_key": "ckey1" 00:14:26.363 } 00:14:26.363 } 00:14:26.363 Got JSON-RPC error response 00:14:26.363 GoRPCClient: error on JSON-RPC call 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77113 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 77113 ']' 
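The Code=-5 rejections traced above are the point of these NOT blocks: each asserts that bdev_nvme_attach_controller fails when the host-side key selection does not match what nvmf_subsystem_add_host registered on the target (key2 against a host added with key0, ckey2 against a host registered with ckey1, and ckey1 against a host entry created with no controller key at all). A hand-run sketch of one such expected-failure check, assuming the same rpc.py invocation the hostrpc wrapper expands to:

# Sketch: assert that a mismatched DH-CHAP attach is rejected, mirroring the
# NOT pattern above. Assumes the host entry exists with --dhchap-key key1 only.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2
if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1; then
    echo "unexpected success: attach should fail with a mismatched controller key" >&2
    exit 1
fi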
00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 77113 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77113 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:26.363 killing process with pid 77113 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77113' 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 77113 00:14:26.363 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 77113 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82078 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82078 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82078 ']' 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
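killprocess, partially visible above, checks that the pid is alive with kill -0, reads the process name via ps --no-headers -o comm= (the traced helper also has uname/Linux and sudo branches, omitted here), then kills and reaps it. A condensed stand-in with the same observable steps:

# Sketch of the killprocess helper, reconstructed from the traced steps.
killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0           # already gone, nothing to do
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for nvmf_tgt
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2> /dev/null || true                  # reap it if it is our child
}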
00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.674 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82078 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82078 ']' 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
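waitforlisten then blocks until the freshly started target (pid 82078) answers on /var/tmp/spdk.sock. Only rpc_addr and max_retries=100 are visible in this excerpt, so the polling body below is an assumption, probing with the rpc_get_methods RPC:

# Sketch: wait until an SPDK app accepts RPCs on its socket, or give up.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1       # app died while starting
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
            return 0                                  # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}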
00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.045 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.301 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.865 00:14:28.865 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.865 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.865 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
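This is the connect_authenticate sha512 ffdhe8192 3 round; its qpair dump and jq assertions follow below. Collapsed into a sketch (rpc_cmd and hostrpc stand for the target- and host-side rpc.py wrappers seen in the trace, and the optional controller-key handling is dropped):

# Sketch of one connect_authenticate round as traced in this log.
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                       # target side
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; } # host side

connect_authenticate() {
    local digest=$1 dhgroup=$2 key=key$3 qpairs
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$NVME_HOSTNQN" \
        --dhchap-key "$key"
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$NVME_HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key"
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]   # e.g. sha512
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. ffdhe8192
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # auth finished
    hostrpc bdev_nvme_detach_controller nvme0
}

connect_authenticate sha512 ffdhe8192 3   # the round traced above, using key3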
00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.122 { 00:14:29.122 "auth": { 00:14:29.122 "dhgroup": "ffdhe8192", 00:14:29.122 "digest": "sha512", 00:14:29.122 "state": "completed" 00:14:29.122 }, 00:14:29.122 "cntlid": 1, 00:14:29.122 "listen_address": { 00:14:29.122 "adrfam": "IPv4", 00:14:29.122 "traddr": "10.0.0.2", 00:14:29.122 "trsvcid": "4420", 00:14:29.122 "trtype": "TCP" 00:14:29.122 }, 00:14:29.122 "peer_address": { 00:14:29.122 "adrfam": "IPv4", 00:14:29.122 "traddr": "10.0.0.1", 00:14:29.122 "trsvcid": "41794", 00:14:29.122 "trtype": "TCP" 00:14:29.122 }, 00:14:29.122 "qid": 0, 00:14:29.122 "state": "enabled", 00:14:29.122 "thread": "nvmf_tgt_poll_group_000" 00:14:29.122 } 00:14:29.122 ]' 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:29.122 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.379 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.379 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.379 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.636 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid 12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-secret DHHC-1:03:NWUwYzAxYjY0MTMyMTk0N2JlODk0NDlkZTY4MjE1M2IwYzRhMjVkYTg2NWNjZWVhZGIzYzRiZGJlODY1YWEzZLozqjE=: 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --dhchap-key key3 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:30.201 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:30.459 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.459 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:30.459 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.459 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:30.459 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.459 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:30.459 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.459 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.459 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.726 2024/07/25 12:21:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:30.726 request: 00:14:30.726 { 00:14:30.726 "method": "bdev_nvme_attach_controller", 00:14:30.726 "params": { 00:14:30.726 "name": "nvme0", 00:14:30.726 "trtype": "tcp", 00:14:30.726 "traddr": "10.0.0.2", 00:14:30.726 "adrfam": "ipv4", 00:14:30.726 "trsvcid": "4420", 00:14:30.726 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:30.726 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2", 00:14:30.726 "prchk_reftag": false, 00:14:30.726 "prchk_guard": false, 00:14:30.726 "hdgst": false, 00:14:30.726 "ddgst": false, 00:14:30.726 "dhchap_key": "key3" 00:14:30.726 } 00:14:30.726 } 00:14:30.726 Got JSON-RPC error response 00:14:30.726 GoRPCClient: error on JSON-RPC call 00:14:30.726 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:30.726 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:30.726 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:30.726 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.000 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:31.000 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:31.000 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:31.000 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:31.258 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.258 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:31.258 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.258 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:31.258 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.258 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:31.258 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.258 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.258 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.258 2024/07/25 12:21:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:31.258 request: 00:14:31.258 { 00:14:31.258 "method": "bdev_nvme_attach_controller", 00:14:31.258 "params": { 00:14:31.258 "name": "nvme0", 00:14:31.258 "trtype": "tcp", 00:14:31.258 "traddr": "10.0.0.2", 00:14:31.258 "adrfam": "ipv4", 00:14:31.258 "trsvcid": "4420", 00:14:31.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:31.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2", 00:14:31.258 "prchk_reftag": false, 00:14:31.258 "prchk_guard": false, 00:14:31.258 "hdgst": false, 00:14:31.258 "ddgst": false, 00:14:31.258 "dhchap_key": "key3" 00:14:31.258 } 00:14:31.258 } 00:14:31.258 Got JSON-RPC error response 00:14:31.258 GoRPCClient: error on JSON-RPC call 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:31.516 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.774 12:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:31.774 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.031 2024/07/25 12:21:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:32.031 request: 00:14:32.031 { 00:14:32.031 "method": "bdev_nvme_attach_controller", 00:14:32.031 "params": { 00:14:32.031 "name": "nvme0", 00:14:32.031 "trtype": "tcp", 00:14:32.031 "traddr": "10.0.0.2", 00:14:32.031 "adrfam": "ipv4", 00:14:32.031 "trsvcid": "4420", 00:14:32.031 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:32.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2", 00:14:32.031 "prchk_reftag": false, 00:14:32.031 "prchk_guard": false, 00:14:32.031 "hdgst": false, 00:14:32.031 "ddgst": false, 00:14:32.031 "dhchap_key": "key0", 00:14:32.031 "dhchap_ctrlr_key": "key1" 00:14:32.031 } 00:14:32.031 } 00:14:32.031 Got JSON-RPC error response 00:14:32.031 GoRPCClient: error on JSON-RPC call 00:14:32.031 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:32.032 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:14:32.032 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.032 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.032 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:32.032 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:32.289 00:14:32.289 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:32.289 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:32.289 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.546 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.546 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.546 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77157 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 77157 ']' 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 77157 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77157 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:32.803 killing process with pid 77157 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77157' 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 77157 00:14:32.803 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 77157 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:33.367 12:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:33.367 rmmod nvme_tcp 00:14:33.367 rmmod nvme_fabrics 00:14:33.367 rmmod nvme_keyring 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82078 ']' 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82078 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 82078 ']' 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 82078 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82078 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82078' 00:14:33.367 killing process with pid 82078 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 82078 00:14:33.367 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 82078 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.625 12:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ByC /tmp/spdk.key-sha256.GR2 /tmp/spdk.key-sha384.dvu /tmp/spdk.key-sha512.Pfy /tmp/spdk.key-sha512.soU /tmp/spdk.key-sha384.JmW /tmp/spdk.key-sha256.Tzy '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:33.625 00:14:33.625 real 2m55.841s 00:14:33.625 user 7m6.555s 00:14:33.625 sys 0m22.614s 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.625 ************************************ 00:14:33.625 END TEST nvmf_auth_target 00:14:33.625 ************************************ 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.625 ************************************ 00:14:33.625 START TEST nvmf_bdevio_no_huge 00:14:33.625 ************************************ 00:14:33.625 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:33.883 * Looking for test storage... 
00:14:33.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.883 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:33.884 Cannot find device "nvmf_tgt_br" 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:33.884 Cannot find device "nvmf_tgt_br2" 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:33.884 Cannot find device "nvmf_tgt_br" 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:33.884 Cannot find device "nvmf_tgt_br2" 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:33.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:33.884 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:33.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.142 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:34.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:14:34.143 00:14:34.143 --- 10.0.0.2 ping statistics --- 00:14:34.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.143 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:34.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:34.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:14:34.143 00:14:34.143 --- 10.0.0.3 ping statistics --- 00:14:34.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.143 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:34.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:34.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:34.143 00:14:34.143 --- 10.0.0.1 ping statistics --- 00:14:34.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.143 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=82493 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 82493 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82493 ']' 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:34.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:34.143 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:34.401 [2024-07-25 12:21:49.024221] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:14:34.401 [2024-07-25 12:21:49.024334] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:34.401 [2024-07-25 12:21:49.168499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.694 [2024-07-25 12:21:49.315449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.694 [2024-07-25 12:21:49.315510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.694 [2024-07-25 12:21:49.315525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.694 [2024-07-25 12:21:49.315535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.694 [2024-07-25 12:21:49.315544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.694 [2024-07-25 12:21:49.315687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:34.694 [2024-07-25 12:21:49.316635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:34.694 [2024-07-25 12:21:49.316791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:34.694 [2024-07-25 12:21:49.316883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:35.260 [2024-07-25 12:21:50.113997] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.260 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.517 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:35.517 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.517 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:35.518 Malloc0 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:35.518 [2024-07-25 12:21:50.152242] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:35.518 { 00:14:35.518 "params": { 00:14:35.518 "name": "Nvme$subsystem", 00:14:35.518 "trtype": "$TEST_TRANSPORT", 00:14:35.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:35.518 "adrfam": "ipv4", 00:14:35.518 "trsvcid": "$NVMF_PORT", 00:14:35.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:35.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:35.518 "hdgst": ${hdgst:-false}, 00:14:35.518 "ddgst": ${ddgst:-false} 00:14:35.518 }, 00:14:35.518 "method": "bdev_nvme_attach_controller" 00:14:35.518 } 00:14:35.518 EOF 00:14:35.518 )") 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
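(Note: the rpc_cmd calls traced above amount to the following standalone target bring-up. This is a minimal sketch assuming rpc.py talks to the default /var/tmp/spdk.sock socket rather than going through the test harness wrappers:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport; -o and -u 8192 are the transport options bdevio.sh passes
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB RAM-backed bdev with a 512-byte block size
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # subsystem allowing any host (-a), serial SPDK00000000000001
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420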
00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:35.518 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:35.518 "params": { 00:14:35.518 "name": "Nvme1", 00:14:35.518 "trtype": "tcp", 00:14:35.518 "traddr": "10.0.0.2", 00:14:35.518 "adrfam": "ipv4", 00:14:35.518 "trsvcid": "4420", 00:14:35.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.518 "hdgst": false, 00:14:35.518 "ddgst": false 00:14:35.518 }, 00:14:35.518 "method": "bdev_nvme_attach_controller" 00:14:35.518 }' 00:14:35.518 [2024-07-25 12:21:50.214156] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:14:35.518 [2024-07-25 12:21:50.214271] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82547 ] 00:14:35.518 [2024-07-25 12:21:50.359641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:35.776 [2024-07-25 12:21:50.505648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.776 [2024-07-25 12:21:50.505799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.776 [2024-07-25 12:21:50.505806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.034 I/O targets: 00:14:36.034 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:36.034 00:14:36.034 00:14:36.034 CUnit - A unit testing framework for C - Version 2.1-3 00:14:36.034 http://cunit.sourceforge.net/ 00:14:36.034 00:14:36.034 00:14:36.034 Suite: bdevio tests on: Nvme1n1 00:14:36.034 Test: blockdev write read block ...passed 00:14:36.034 Test: blockdev write zeroes read block ...passed 00:14:36.034 Test: blockdev write zeroes read no split ...passed 00:14:36.034 Test: blockdev write zeroes read split ...passed 00:14:36.034 Test: blockdev write zeroes read split partial ...passed 00:14:36.034 Test: blockdev reset ...[2024-07-25 12:21:50.827153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:36.034 [2024-07-25 12:21:50.827273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a460 (9): Bad file descriptor 00:14:36.034 [2024-07-25 12:21:50.844836] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
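(Note: the "--json /dev/fd/62" in the bdevio invocation above is bash process substitution; the generated config is piped straight in and never written to disk. A minimal sketch of the same idiom, with a hypothetical gen_config helper standing in for the harness's gen_nvmf_target_json:)

    # hypothetical helper: emit an SPDK app config that attaches the NVMe-oF bdev
    gen_config() {
        printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1" } } ] } ] }'
    }
    # <(gen_config) expands to /dev/fd/<n>; bdevio reads it like a regular file
    ./test/bdev/bdevio/bdevio --json <(gen_config) --no-huge -s 1024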
00:14:36.034 passed 00:14:36.034 Test: blockdev write read 8 blocks ...passed 00:14:36.034 Test: blockdev write read size > 128k ...passed 00:14:36.034 Test: blockdev write read invalid size ...passed 00:14:36.034 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:36.034 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:36.034 Test: blockdev write read max offset ...passed 00:14:36.292 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:36.292 Test: blockdev writev readv 8 blocks ...passed 00:14:36.292 Test: blockdev writev readv 30 x 1block ...passed 00:14:36.292 Test: blockdev writev readv block ...passed 00:14:36.292 Test: blockdev writev readv size > 128k ...passed 00:14:36.292 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:36.292 Test: blockdev comparev and writev ...[2024-07-25 12:21:51.018782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.293 [2024-07-25 12:21:51.018840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.018862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.293 [2024-07-25 12:21:51.018873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.019277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.293 [2024-07-25 12:21:51.019315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.019334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.293 [2024-07-25 12:21:51.019345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.019787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.293 [2024-07-25 12:21:51.019813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.019830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.293 [2024-07-25 12:21:51.019840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.020228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.293 [2024-07-25 12:21:51.020254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.020271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:36.293 [2024-07-25 12:21:51.020281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:36.293 passed 00:14:36.293 Test: blockdev nvme passthru rw ...passed 00:14:36.293 Test: blockdev nvme passthru vendor specific ...[2024-07-25 12:21:51.104741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.293 [2024-07-25 12:21:51.104795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.104942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.293 [2024-07-25 12:21:51.104960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.105088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.293 [2024-07-25 12:21:51.105110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:36.293 [2024-07-25 12:21:51.105225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.293 [2024-07-25 12:21:51.105243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:36.293 passed 00:14:36.293 Test: blockdev nvme admin passthru ...passed 00:14:36.551 Test: blockdev copy ...passed 00:14:36.551 00:14:36.551 Run Summary: Type Total Ran Passed Failed Inactive 00:14:36.551 suites 1 1 n/a 0 0 00:14:36.551 tests 23 23 23 0 0 00:14:36.551 asserts 152 152 152 0 n/a 00:14:36.551 00:14:36.551 Elapsed time = 0.924 seconds 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:36.808 rmmod nvme_tcp 00:14:36.808 rmmod nvme_fabrics 00:14:36.808 rmmod nvme_keyring 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 82493 ']' 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 82493 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82493 ']' 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82493 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:14:36.808 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:36.809 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82493 00:14:36.809 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:36.809 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:36.809 killing process with pid 82493 00:14:36.809 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82493' 00:14:36.809 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82493 00:14:36.809 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82493 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:37.374 00:14:37.374 real 0m3.616s 00:14:37.374 user 0m12.863s 00:14:37.374 sys 0m1.313s 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:37.374 ************************************ 00:14:37.374 END TEST nvmf_bdevio_no_huge 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:37.374 ************************************ 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
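(Note: the teardown just traced, killprocess on the target pid, modprobe -r of nvme-tcp with its nvme_fabrics and nvme_keyring dependencies, then namespace removal and address flush, is driven by the trap installed when the suite started. The pattern, as a minimal sketch with hypothetical helper names:)

    cleanup() {                              # hypothetical teardown helper
        killprocess "$nvmfpid"               # kill the nvmf_tgt process and wait on it
        modprobe -v -r nvme-tcp              # also pulls out nvme_fabrics / nvme_keyring
        ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    }
    trap cleanup SIGINT SIGTERM EXIT         # fires on ^C, kill, or any early exit
    run_suite                                # hypothetical test body
    trap - SIGINT SIGTERM EXIT               # success path: clear the trap ...
    cleanup                                  # ... and tear down deterministically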
00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.374 ************************************ 00:14:37.374 START TEST nvmf_tls 00:14:37.374 ************************************ 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:37.374 * Looking for test storage... 00:14:37.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.374 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
[paths/export.sh@2-6: PATH export and echo lines elided: same repeated toolchain directories as in the bdevio suite above] 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
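(Note: the NVME_HOSTNQN / NVME_HOSTID pair captured above via nvme gen-hostnqn gives each run a fresh host identity for later connect calls. A minimal sketch of the derivation, assuming nvme-cli is installed; the final connect line is illustrative:)

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # the bare <uuid> suffix
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # e.g.: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn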
00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:37.633 Cannot find device 
"nvmf_tgt_br" 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.633 Cannot find device "nvmf_tgt_br2" 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:37.633 Cannot find device "nvmf_tgt_br" 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:37.633 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:37.633 Cannot find device "nvmf_tgt_br2" 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:37.634 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:37.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:14:37.892 00:14:37.892 --- 10.0.0.2 ping statistics --- 00:14:37.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.892 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:37.892 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:37.892 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:37.892 00:14:37.892 --- 10.0.0.3 ping statistics --- 00:14:37.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.892 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:37.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:37.892 00:14:37.892 --- 10.0.0.1 ping statistics --- 00:14:37.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.892 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:37.892 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=82728 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 82728 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 82728 ']' 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.893 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.893 [2024-07-25 12:21:52.686985] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:14:37.893 [2024-07-25 12:21:52.687104] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.151 [2024-07-25 12:21:52.824321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.151 [2024-07-25 12:21:52.945976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.151 [2024-07-25 12:21:52.946051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.151 [2024-07-25 12:21:52.946078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.151 [2024-07-25 12:21:52.946086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.151 [2024-07-25 12:21:52.946093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.151 [2024-07-25 12:21:52.946122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.084 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.084 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:39.084 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.084 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.084 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.084 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.084 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:39.084 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:39.342 true 00:14:39.342 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:39.342 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:39.600 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:39.600 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:39.600 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:39.858 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:39.858 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:40.116 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:40.116 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:40.116 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:40.374 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:40.374 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:40.632 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:40.632 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:40.632 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:40.632 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:40.890 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:40.890 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:40.890 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:41.148 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:41.148 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:14:41.406 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:41.406 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:41.406 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:41.664 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:41.664 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.FXXJJgoVcr 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:41.927 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.94NU8MLevd 00:14:41.928 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:41.928 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:41.928 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.FXXJJgoVcr 00:14:41.928 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.94NU8MLevd 00:14:41.928 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:42.197 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:42.455 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.FXXJJgoVcr 00:14:42.455 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FXXJJgoVcr 00:14:42.455 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:42.713 [2024-07-25 12:21:57.558669] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.971 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:42.971 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:43.229 [2024-07-25 12:21:58.026761] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:43.229 [2024-07-25 12:21:58.026982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.229 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:43.489 malloc0 00:14:43.489 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:43.747 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FXXJJgoVcr 00:14:44.005 [2024-07-25 12:21:58.838149] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: 
nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:44.005 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.FXXJJgoVcr 00:14:56.202 Initializing NVMe Controllers 00:14:56.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:56.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:56.202 Initialization complete. Launching workers. 00:14:56.202 ======================================================== 00:14:56.202 Latency(us) 00:14:56.202 Device Information : IOPS MiB/s Average min max 00:14:56.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9535.58 37.25 6713.18 2226.44 8372.93 00:14:56.203 ======================================================== 00:14:56.203 Total : 9535.58 37.25 6713.18 2226.44 8372.93 00:14:56.203 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FXXJJgoVcr 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FXXJJgoVcr' 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83086 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83086 /var/tmp/bdevperf.sock 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83086 ']' 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:56.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.203 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.203 [2024-07-25 12:22:09.128080] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
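For reference, the format_interchange_psk/format_key helpers traced above build the TLS PSK interchange string by base64-encoding the secret with a CRC-32 appended. A minimal Python sketch, assuming the CRC-32 is computed over the secret bytes and appended little-endian (this reproduces the NVMeTLSkey-1:01:... value logged above):

    import base64
    import zlib

    def format_interchange_psk(secret: str, hash_id: int) -> str:
        # Append a little-endian CRC-32 of the secret bytes, then base64 the
        # whole thing; hash_id 1 selects the "01" variant seen in this log.
        data = secret.encode("ascii")
        crc = zlib.crc32(data).to_bytes(4, "little")
        return "NVMeTLSkey-1:%02d:%s:" % (hash_id, base64.b64encode(data + crc).decode("ascii"))

    # Matches the key written to /tmp/tmp.FXXJJgoVcr above:
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))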
00:14:56.203 [2024-07-25 12:22:09.128196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83086 ] 00:14:56.203 [2024-07-25 12:22:09.286261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.203 [2024-07-25 12:22:09.417343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.203 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.203 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:56.203 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FXXJJgoVcr 00:14:56.203 [2024-07-25 12:22:10.365820] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:56.203 [2024-07-25 12:22:10.365947] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:56.203 TLSTESTn1 00:14:56.203 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:56.203 Running I/O for 10 seconds... 00:15:06.169 00:15:06.169 Latency(us) 00:15:06.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.169 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:06.169 Verification LBA range: start 0x0 length 0x2000 00:15:06.169 TLSTESTn1 : 10.02 3818.21 14.91 0.00 0.00 33456.54 7328.12 20375.74 00:15:06.169 =================================================================================================================== 00:15:06.169 Total : 3818.21 14.91 0.00 0.00 33456.54 7328.12 20375.74 00:15:06.169 0 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 83086 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83086 ']' 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83086 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83086 00:15:06.169 killing process with pid 83086 00:15:06.169 Received shutdown signal, test time was about 10.000000 seconds 00:15:06.169 00:15:06.169 Latency(us) 00:15:06.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.169 =================================================================================================================== 00:15:06.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83086' 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83086 00:15:06.169 [2024-07-25 12:22:20.653858] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83086 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.94NU8MLevd 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.94NU8MLevd 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.94NU8MLevd 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.94NU8MLevd' 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83238 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:06.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83238 /var/tmp/bdevperf.sock 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83238 ']' 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.169 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.170 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.170 [2024-07-25 12:22:20.969142] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:06.170 [2024-07-25 12:22:20.969268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83238 ] 00:15:06.427 [2024-07-25 12:22:21.109081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.427 [2024-07-25 12:22:21.230972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.362 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.362 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:07.362 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.94NU8MLevd 00:15:07.362 [2024-07-25 12:22:22.191512] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:07.362 [2024-07-25 12:22:22.191632] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:07.362 [2024-07-25 12:22:22.201138] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:07.362 [2024-07-25 12:22:22.201365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53ca0 (107): Transport endpoint is not connected 00:15:07.362 [2024-07-25 12:22:22.202346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53ca0 (9): Bad file descriptor 00:15:07.362 [2024-07-25 12:22:22.203341] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:07.362 [2024-07-25 12:22:22.203383] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:07.362 [2024-07-25 12:22:22.203399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:07.362 2024/07/25 12:22:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.94NU8MLevd subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:07.362 request: 00:15:07.362 { 00:15:07.362 "method": "bdev_nvme_attach_controller", 00:15:07.362 "params": { 00:15:07.362 "name": "TLSTEST", 00:15:07.362 "trtype": "tcp", 00:15:07.362 "traddr": "10.0.0.2", 00:15:07.362 "adrfam": "ipv4", 00:15:07.362 "trsvcid": "4420", 00:15:07.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.362 "prchk_reftag": false, 00:15:07.362 "prchk_guard": false, 00:15:07.362 "hdgst": false, 00:15:07.362 "ddgst": false, 00:15:07.362 "psk": "/tmp/tmp.94NU8MLevd" 00:15:07.362 } 00:15:07.362 } 00:15:07.362 Got JSON-RPC error response 00:15:07.362 GoRPCClient: error on JSON-RPC call 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83238 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83238 ']' 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83238 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83238 00:15:07.621 killing process with pid 83238 00:15:07.621 Received shutdown signal, test time was about 10.000000 seconds 00:15:07.621 00:15:07.621 Latency(us) 00:15:07.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.621 =================================================================================================================== 00:15:07.621 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83238' 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83238 00:15:07.621 [2024-07-25 12:22:22.258921] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:07.621 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83238 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
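The request/params dump above is the raw JSON-RPC exchange with bdevperf's control socket. The test drives it through scripts/rpc.py, but an equivalent standalone call can be sketched as below (socket path and params copied from the log; the single recv() is a simplification of SPDK's streamed JSON-RPC framing):

    import json
    import socket

    # Mirrors the bdev_nvme_attach_controller request dumped above. The attach
    # is expected to fail: /tmp/tmp.94NU8MLevd is not the key the target
    # registered for nqn.2016-06.io.spdk:host1.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "/tmp/tmp.94NU8MLevd",
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/bdevperf.sock")
        sock.sendall(json.dumps(request).encode())
        print(sock.recv(65536).decode())  # expect Code=-5 Msg=Input/output error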
00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FXXJJgoVcr 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FXXJJgoVcr 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FXXJJgoVcr 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FXXJJgoVcr' 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83289 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83289 /var/tmp/bdevperf.sock 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83289 ']' 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.879 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.879 [2024-07-25 12:22:22.554556] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
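target/tls.sh wraps these expected-failure cases in autotest_common.sh's NOT/valid_exec_arg machinery, which runs the command and inverts its exit status. A Python analog of that pattern, for illustration only (the real helper is bash):

    import subprocess

    def NOT(*cmd: str) -> bool:
        # The negative tests above and below (tls.sh@146/149/152/155/171) pass
        # only when the wrapped run_bdevperf invocation fails.
        return subprocess.run(cmd).returncode != 0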
00:15:07.879 [2024-07-25 12:22:22.554656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83289 ] 00:15:07.879 [2024-07-25 12:22:22.685717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.137 [2024-07-25 12:22:22.799413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.709 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.709 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:08.709 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FXXJJgoVcr 00:15:08.971 [2024-07-25 12:22:23.812743] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.971 [2024-07-25 12:22:23.812858] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:08.971 [2024-07-25 12:22:23.821241] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:08.971 [2024-07-25 12:22:23.821283] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:08.971 [2024-07-25 12:22:23.821348] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:08.971 [2024-07-25 12:22:23.821557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caca0 (107): Transport endpoint is not connected 00:15:08.971 [2024-07-25 12:22:23.822545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caca0 (9): Bad file descriptor 00:15:08.971 [2024-07-25 12:22:23.823542] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:08.971 [2024-07-25 12:22:23.823569] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:08.971 [2024-07-25 12:22:23.823585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:08.971 2024/07/25 12:22:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.FXXJJgoVcr subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:08.971 request: 00:15:08.971 { 00:15:08.971 "method": "bdev_nvme_attach_controller", 00:15:08.971 "params": { 00:15:08.971 "name": "TLSTEST", 00:15:08.971 "trtype": "tcp", 00:15:08.971 "traddr": "10.0.0.2", 00:15:08.971 "adrfam": "ipv4", 00:15:08.971 "trsvcid": "4420", 00:15:08.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.971 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:08.971 "prchk_reftag": false, 00:15:08.971 "prchk_guard": false, 00:15:08.971 "hdgst": false, 00:15:08.971 "ddgst": false, 00:15:08.971 "psk": "/tmp/tmp.FXXJJgoVcr" 00:15:08.971 } 00:15:08.971 } 00:15:08.971 Got JSON-RPC error response 00:15:08.971 GoRPCClient: error on JSON-RPC call 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83289 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83289 ']' 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83289 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83289 00:15:09.229 killing process with pid 83289 00:15:09.229 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.229 00:15:09.229 Latency(us) 00:15:09.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.229 =================================================================================================================== 00:15:09.229 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83289' 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83289 00:15:09.229 [2024-07-25 12:22:23.877737] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:09.229 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83289 00:15:09.487 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
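The target-side errors above ("Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1") show how the listener keys its PSK lookup: a fixed NVMe prefix, a hash identifier, then the host and subsystem NQNs. A sketch with the field layout inferred from the logged string:

    def tls_psk_identity(hostnqn: str, subnqn: str, hash_id: int = 1) -> str:
        # "NVMe0R01 <hostnqn> <subnqn>" exactly as logged above; the '0' and
        # 'R' fields are copied from the log rather than derived here.
        return "NVMe0R%02d %s %s" % (hash_id, hostnqn, subnqn)

    # The identity offered by host2 in test 149, which the target cannot match:
    print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))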
00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FXXJJgoVcr 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FXXJJgoVcr 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FXXJJgoVcr 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FXXJJgoVcr' 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83329 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83329 /var/tmp/bdevperf.sock 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83329 ']' 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.488 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.488 [2024-07-25 12:22:24.198323] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:15:09.488 [2024-07-25 12:22:24.198463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83329 ] 00:15:09.488 [2024-07-25 12:22:24.335028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.746 [2024-07-25 12:22:24.460796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.679 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.680 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:10.680 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FXXJJgoVcr 00:15:10.680 [2024-07-25 12:22:25.522014] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.680 [2024-07-25 12:22:25.522141] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:10.680 [2024-07-25 12:22:25.531633] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:10.680 [2024-07-25 12:22:25.531712] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:10.680 [2024-07-25 12:22:25.531789] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:10.680 [2024-07-25 12:22:25.531942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd8ca0 (107): Transport endpoint is not connected 00:15:10.680 [2024-07-25 12:22:25.532915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd8ca0 (9): Bad file descriptor 00:15:10.680 [2024-07-25 12:22:25.533911] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:10.680 [2024-07-25 12:22:25.533957] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:10.680 [2024-07-25 12:22:25.533988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:15:10.680 2024/07/25 12:22:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.FXXJJgoVcr subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:10.680 request: 00:15:10.680 { 00:15:10.680 "method": "bdev_nvme_attach_controller", 00:15:10.680 "params": { 00:15:10.680 "name": "TLSTEST", 00:15:10.680 "trtype": "tcp", 00:15:10.680 "traddr": "10.0.0.2", 00:15:10.680 "adrfam": "ipv4", 00:15:10.680 "trsvcid": "4420", 00:15:10.680 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:10.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.680 "prchk_reftag": false, 00:15:10.680 "prchk_guard": false, 00:15:10.680 "hdgst": false, 00:15:10.680 "ddgst": false, 00:15:10.680 "psk": "/tmp/tmp.FXXJJgoVcr" 00:15:10.680 } 00:15:10.680 } 00:15:10.680 Got JSON-RPC error response 00:15:10.680 GoRPCClient: error on JSON-RPC call 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83329 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83329 ']' 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83329 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83329 00:15:10.938 killing process with pid 83329 00:15:10.938 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.938 00:15:10.938 Latency(us) 00:15:10.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.938 =================================================================================================================== 00:15:10.938 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83329' 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83329 00:15:10.938 [2024-07-25 12:22:25.587670] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:10.938 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83329 00:15:11.196 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:11.196 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:11.196 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.196 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.196 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
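Tests 149 and 152 probe the same lookup from both sides of the identity: wrong hostnqn, then wrong subnqn. A toy model of the lookup that posix_sock_psk_find_session_server_cb's error suggests (hypothetical names; the real implementation is C code in posix.c/tcp.c):

    # Only the host1/cnode1 pairing was registered via nvmf_subsystem_add_host.
    registered_psks = {
        "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1":
            "/tmp/tmp.FXXJJgoVcr",
    }

    def find_session_psk(identity: str):
        # A wrong hostnqn (149) or wrong subnqn (152) misses this map outright;
        # a wrong key file (146) matches the identity but fails key
        # verification later in the handshake.
        return registered_psks.get(identity)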
00:15:11.196 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:11.196 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:11.196 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:11.196 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83379 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83379 /var/tmp/bdevperf.sock 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83379 ']' 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.197 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.197 [2024-07-25 12:22:25.919836] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:15:11.197 [2024-07-25 12:22:25.919962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83379 ] 00:15:11.455 [2024-07-25 12:22:26.063899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.455 [2024-07-25 12:22:26.204794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.396 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.396 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:12.396 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:12.396 [2024-07-25 12:22:27.211859] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:12.396 [2024-07-25 12:22:27.213767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115e240 (9): Bad file descriptor 00:15:12.396 [2024-07-25 12:22:27.214762] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:12.396 [2024-07-25 12:22:27.214787] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:12.396 [2024-07-25 12:22:27.214802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:12.396 2024/07/25 12:22:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:12.396 request: 00:15:12.396 { 00:15:12.396 "method": "bdev_nvme_attach_controller", 00:15:12.396 "params": { 00:15:12.396 "name": "TLSTEST", 00:15:12.396 "trtype": "tcp", 00:15:12.396 "traddr": "10.0.0.2", 00:15:12.396 "adrfam": "ipv4", 00:15:12.396 "trsvcid": "4420", 00:15:12.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.396 "prchk_reftag": false, 00:15:12.397 "prchk_guard": false, 00:15:12.397 "hdgst": false, 00:15:12.397 "ddgst": false 00:15:12.397 } 00:15:12.397 } 00:15:12.397 Got JSON-RPC error response 00:15:12.397 GoRPCClient: error on JSON-RPC call 00:15:12.397 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83379 00:15:12.397 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83379 ']' 00:15:12.397 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83379 00:15:12.397 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:12.397 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.397 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83379 00:15:12.655 killing process with pid 83379 00:15:12.655 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.655 00:15:12.655 Latency(us) 00:15:12.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.655 =================================================================================================================== 00:15:12.655 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83379' 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83379 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83379 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 82728 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 82728 ']' 00:15:12.655 12:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 82728 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.655 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82728 00:15:12.914 killing process with pid 82728 00:15:12.914 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:12.914 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:12.914 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82728' 00:15:12.914 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 82728 00:15:12.914 [2024-07-25 12:22:27.531011] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:12.914 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 82728 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.gqH2L2X4E7 00:15:13.183 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.gqH2L2X4E7 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83437 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.184 12:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83437 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83437 ']' 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.184 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.184 [2024-07-25 12:22:27.912515] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:13.184 [2024-07-25 12:22:27.912640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.462 [2024-07-25 12:22:28.056916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.462 [2024-07-25 12:22:28.198610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.462 [2024-07-25 12:22:28.198678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.462 [2024-07-25 12:22:28.198692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.462 [2024-07-25 12:22:28.198702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.462 [2024-07-25 12:22:28.198711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
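The second interchange key in this run uses hash identifier 2 with a 48-character secret; re-using the format_interchange_psk sketch from earlier reproduces the logged NVMeTLSkey-1:02:... value:

    # Matches the key written to /tmp/tmp.gqH2L2X4E7 above:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))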
00:15:13.462 [2024-07-25 12:22:28.198752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.396 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.396 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:14.396 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.396 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:14.396 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.396 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.396 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.gqH2L2X4E7 00:15:14.396 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gqH2L2X4E7 00:15:14.396 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:14.396 [2024-07-25 12:22:29.214605] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.396 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:14.962 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:14.962 [2024-07-25 12:22:29.770784] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:14.962 [2024-07-25 12:22:29.771027] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.962 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:15.221 malloc0 00:15:15.221 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gqH2L2X4E7 00:15:15.788 [2024-07-25 12:22:30.599035] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gqH2L2X4E7 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gqH2L2X4E7' 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:15.788 12:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83544 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83544 /var/tmp/bdevperf.sock 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83544 ']' 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.788 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.045 [2024-07-25 12:22:30.676477] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:16.046 [2024-07-25 12:22:30.676562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83544 ] 00:15:16.046 [2024-07-25 12:22:30.813214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.303 [2024-07-25 12:22:30.951135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.868 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.868 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:16.868 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gqH2L2X4E7 00:15:17.126 [2024-07-25 12:22:31.983511] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.126 [2024-07-25 12:22:31.983627] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:17.385 TLSTESTn1 00:15:17.385 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:17.385 Running I/O for 10 seconds... 
00:15:27.353 00:15:27.353 Latency(us) 00:15:27.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.353 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:27.353 Verification LBA range: start 0x0 length 0x2000 00:15:27.353 TLSTESTn1 : 10.02 3695.60 14.44 0.00 0.00 34554.72 6970.65 25261.15 00:15:27.353 =================================================================================================================== 00:15:27.353 Total : 3695.60 14.44 0.00 0.00 34554.72 6970.65 25261.15 00:15:27.353 0 00:15:27.353 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:27.353 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 83544 00:15:27.353 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83544 ']' 00:15:27.353 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83544 00:15:27.611 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:27.611 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.611 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83544 00:15:27.611 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:27.611 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:27.612 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83544' 00:15:27.612 killing process with pid 83544 00:15:27.612 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83544 00:15:27.612 Received shutdown signal, test time was about 10.000000 seconds 00:15:27.612 00:15:27.612 Latency(us) 00:15:27.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.612 =================================================================================================================== 00:15:27.612 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.612 [2024-07-25 12:22:42.247748] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:27.612 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83544 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.gqH2L2X4E7 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gqH2L2X4E7 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gqH2L2X4E7 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:27.870 12:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gqH2L2X4E7 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gqH2L2X4E7' 00:15:27.870 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83693 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83693 /var/tmp/bdevperf.sock 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83693 ']' 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:27.871 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.871 [2024-07-25 12:22:42.556984] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:15:27.871 [2024-07-25 12:22:42.557558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83693 ] 00:15:27.871 [2024-07-25 12:22:42.702143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.129 [2024-07-25 12:22:42.815950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.701 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.701 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:28.701 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gqH2L2X4E7 00:15:29.267 [2024-07-25 12:22:43.902618] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:29.267 [2024-07-25 12:22:43.902710] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:29.267 [2024-07-25 12:22:43.902723] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.gqH2L2X4E7 00:15:29.267 2024/07/25 12:22:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.gqH2L2X4E7 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:15:29.267 request: 00:15:29.267 { 00:15:29.267 "method": "bdev_nvme_attach_controller", 00:15:29.267 "params": { 00:15:29.267 "name": "TLSTEST", 00:15:29.267 "trtype": "tcp", 00:15:29.267 "traddr": "10.0.0.2", 00:15:29.267 "adrfam": "ipv4", 00:15:29.267 "trsvcid": "4420", 00:15:29.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:29.267 "prchk_reftag": false, 00:15:29.267 "prchk_guard": false, 00:15:29.267 "hdgst": false, 00:15:29.267 "ddgst": false, 00:15:29.267 "psk": "/tmp/tmp.gqH2L2X4E7" 00:15:29.267 } 00:15:29.267 } 00:15:29.267 Got JSON-RPC error response 00:15:29.267 GoRPCClient: error on JSON-RPC call 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83693 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83693 ']' 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83693 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83693 00:15:29.267 killing process with pid 83693 00:15:29.267 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.267 00:15:29.267 Latency(us) 00:15:29.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.267 
=================================================================================================================== 00:15:29.267 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83693' 00:15:29.267 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83693 00:15:29.268 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83693 00:15:29.526 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:29.526 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 83437 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83437 ']' 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83437 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83437 00:15:29.527 killing process with pid 83437 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83437' 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83437 00:15:29.527 [2024-07-25 12:22:44.210650] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:29.527 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83437 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
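The chmod 0666 / NOT run_bdevperf pair traced above is the negative half of the test: with a group- and world-readable key, bdev_nvme_attach_controller must fail ("Incorrect permissions for PSK file", surfaced to the RPC client as Code=-1 Operation not permitted), and the NOT wrapper turns that expected failure into a pass. A simplified sketch of the pattern, reconstructed from the xtrace (the real helper in autotest_common.sh additionally checks for signal exits via es > 128 and optional expected-error strings):

    # Hypothetical, simplified NOT: succeed only when the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # invert: a non-zero exit from the command means PASS
    }

    chmod 0666 /tmp/tmp.gqH2L2X4E7    # PSK must stay private; 0666 has to be rejected
    NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.gqH2L2X4E7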
00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83748 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83748 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83748 ']' 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.785 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.785 [2024-07-25 12:22:44.518638] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:29.785 [2024-07-25 12:22:44.518900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.043 [2024-07-25 12:22:44.656489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.043 [2024-07-25 12:22:44.768941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.043 [2024-07-25 12:22:44.769005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.043 [2024-07-25 12:22:44.769018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.043 [2024-07-25 12:22:44.769027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.043 [2024-07-25 12:22:44.769035] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
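The app_setup_trace notices above come from the -e 0xFFFF tracepoint mask the target was started with; per those messages, a snapshot of the live events can be pulled at any point while the target (pid 83748, instance id 0) is running:

    # Capture the nvmf target's tracepoint buffer (shm name 'nvmf', instance id 0)
    spdk_trace -s nvmf -i 0

    # Or keep the raw shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved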
00:15:30.043 [2024-07-25 12:22:44.769075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.gqH2L2X4E7 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gqH2L2X4E7 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.gqH2L2X4E7 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gqH2L2X4E7 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:30.977 [2024-07-25 12:22:45.782736] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.977 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:31.235 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:31.493 [2024-07-25 12:22:46.314871] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:31.493 [2024-07-25 12:22:46.315112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.493 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:31.751 malloc0 00:15:31.751 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:32.317 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gqH2L2X4E7 00:15:32.317 [2024-07-25 12:22:47.099173] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect 
permissions for PSK file 00:15:32.317 [2024-07-25 12:22:47.099218] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:32.317 [2024-07-25 12:22:47.099267] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:32.317 2024/07/25 12:22:47 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.gqH2L2X4E7], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:15:32.317 request: 00:15:32.317 { 00:15:32.317 "method": "nvmf_subsystem_add_host", 00:15:32.317 "params": { 00:15:32.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.317 "host": "nqn.2016-06.io.spdk:host1", 00:15:32.317 "psk": "/tmp/tmp.gqH2L2X4E7" 00:15:32.317 } 00:15:32.317 } 00:15:32.317 Got JSON-RPC error response 00:15:32.317 GoRPCClient: error on JSON-RPC call 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 83748 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83748 ']' 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83748 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83748 00:15:32.317 killing process with pid 83748 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83748' 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83748 00:15:32.317 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83748 00:15:32.575 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.gqH2L2X4E7 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83860 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.576 12:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83860 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83860 ']' 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.576 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.834 [2024-07-25 12:22:47.473044] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:32.834 [2024-07-25 12:22:47.473189] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.834 [2024-07-25 12:22:47.626393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.091 [2024-07-25 12:22:47.740950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.091 [2024-07-25 12:22:47.741015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.091 [2024-07-25 12:22:47.741043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.091 [2024-07-25 12:22:47.741051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.091 [2024-07-25 12:22:47.741074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
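The setup_nvmf_tgt call traced below (target/tls.sh@185, this time with the key restored to 0600 at @181) configures the target side of the TLS connection. Pulled out of the xtrace, the RPC sequence amounts to roughly the following sketch; the key path and NQNs are the ones from this run, and the -o interpretation is inferred from the saved config, which shows "c2h_success": false:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/tmp/tmp.gqH2L2X4E7

    $RPC nvmf_create_transport -t tcp -o              # TCP transport, C2H success opt off
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10                   # serial number, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                 # -k: secure (TLS) channel
    $RPC bdev_malloc_create 32 4096 -b malloc0        # 32 MiB backing bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"        # PSK-path form, deprecated in v24.09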
00:15:33.091 [2024-07-25 12:22:47.741108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.657 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.657 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:33.657 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.657 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:33.657 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.657 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.657 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.gqH2L2X4E7 00:15:33.657 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gqH2L2X4E7 00:15:33.657 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:34.223 [2024-07-25 12:22:48.794037] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.223 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:34.480 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:34.738 [2024-07-25 12:22:49.362194] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:34.738 [2024-07-25 12:22:49.362422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.738 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:34.996 malloc0 00:15:34.996 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:35.254 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gqH2L2X4E7 00:15:35.512 [2024-07-25 12:22:50.138794] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=83963 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 83963 /var/tmp/bdevperf.sock 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83963 ']' 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.512 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.512 [2024-07-25 12:22:50.205074] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:35.512 [2024-07-25 12:22:50.205163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83963 ] 00:15:35.512 [2024-07-25 12:22:50.342329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.769 [2024-07-25 12:22:50.459149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.334 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.334 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:36.334 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gqH2L2X4E7 00:15:36.592 [2024-07-25 12:22:51.414861] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:36.592 [2024-07-25 12:22:51.415012] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:36.850 TLSTESTn1 00:15:36.850 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:37.108 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:37.108 "subsystems": [ 00:15:37.108 { 00:15:37.108 "subsystem": "keyring", 00:15:37.108 "config": [] 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "subsystem": "iobuf", 00:15:37.108 "config": [ 00:15:37.108 { 00:15:37.108 "method": "iobuf_set_options", 00:15:37.108 "params": { 00:15:37.108 "large_bufsize": 135168, 00:15:37.108 "large_pool_count": 1024, 00:15:37.108 "small_bufsize": 8192, 00:15:37.108 "small_pool_count": 8192 00:15:37.108 } 00:15:37.108 } 00:15:37.108 ] 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "subsystem": "sock", 00:15:37.108 "config": [ 00:15:37.108 { 00:15:37.108 "method": "sock_set_default_impl", 00:15:37.108 "params": { 00:15:37.108 "impl_name": "posix" 00:15:37.108 } 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "method": "sock_impl_set_options", 00:15:37.108 "params": { 00:15:37.108 "enable_ktls": false, 00:15:37.108 "enable_placement_id": 0, 00:15:37.108 "enable_quickack": false, 00:15:37.108 "enable_recv_pipe": true, 00:15:37.108 "enable_zerocopy_send_client": false, 00:15:37.108 "enable_zerocopy_send_server": true, 00:15:37.108 "impl_name": "ssl", 00:15:37.108 "recv_buf_size": 4096, 
00:15:37.108 "send_buf_size": 4096, 00:15:37.108 "tls_version": 0, 00:15:37.108 "zerocopy_threshold": 0 00:15:37.108 } 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "method": "sock_impl_set_options", 00:15:37.108 "params": { 00:15:37.108 "enable_ktls": false, 00:15:37.108 "enable_placement_id": 0, 00:15:37.108 "enable_quickack": false, 00:15:37.108 "enable_recv_pipe": true, 00:15:37.108 "enable_zerocopy_send_client": false, 00:15:37.108 "enable_zerocopy_send_server": true, 00:15:37.108 "impl_name": "posix", 00:15:37.108 "recv_buf_size": 2097152, 00:15:37.108 "send_buf_size": 2097152, 00:15:37.108 "tls_version": 0, 00:15:37.108 "zerocopy_threshold": 0 00:15:37.108 } 00:15:37.108 } 00:15:37.108 ] 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "subsystem": "vmd", 00:15:37.108 "config": [] 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "subsystem": "accel", 00:15:37.108 "config": [ 00:15:37.108 { 00:15:37.108 "method": "accel_set_options", 00:15:37.108 "params": { 00:15:37.108 "buf_count": 2048, 00:15:37.108 "large_cache_size": 16, 00:15:37.108 "sequence_count": 2048, 00:15:37.108 "small_cache_size": 128, 00:15:37.108 "task_count": 2048 00:15:37.108 } 00:15:37.108 } 00:15:37.108 ] 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "subsystem": "bdev", 00:15:37.108 "config": [ 00:15:37.108 { 00:15:37.108 "method": "bdev_set_options", 00:15:37.108 "params": { 00:15:37.108 "bdev_auto_examine": true, 00:15:37.108 "bdev_io_cache_size": 256, 00:15:37.108 "bdev_io_pool_size": 65535, 00:15:37.108 "iobuf_large_cache_size": 16, 00:15:37.108 "iobuf_small_cache_size": 128 00:15:37.108 } 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "method": "bdev_raid_set_options", 00:15:37.108 "params": { 00:15:37.108 "process_max_bandwidth_mb_sec": 0, 00:15:37.108 "process_window_size_kb": 1024 00:15:37.108 } 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "method": "bdev_iscsi_set_options", 00:15:37.108 "params": { 00:15:37.108 "timeout_sec": 30 00:15:37.108 } 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "method": "bdev_nvme_set_options", 00:15:37.108 "params": { 00:15:37.108 "action_on_timeout": "none", 00:15:37.108 "allow_accel_sequence": false, 00:15:37.108 "arbitration_burst": 0, 00:15:37.108 "bdev_retry_count": 3, 00:15:37.108 "ctrlr_loss_timeout_sec": 0, 00:15:37.108 "delay_cmd_submit": true, 00:15:37.108 "dhchap_dhgroups": [ 00:15:37.108 "null", 00:15:37.108 "ffdhe2048", 00:15:37.108 "ffdhe3072", 00:15:37.108 "ffdhe4096", 00:15:37.108 "ffdhe6144", 00:15:37.108 "ffdhe8192" 00:15:37.108 ], 00:15:37.108 "dhchap_digests": [ 00:15:37.108 "sha256", 00:15:37.108 "sha384", 00:15:37.108 "sha512" 00:15:37.108 ], 00:15:37.108 "disable_auto_failback": false, 00:15:37.108 "fast_io_fail_timeout_sec": 0, 00:15:37.108 "generate_uuids": false, 00:15:37.108 "high_priority_weight": 0, 00:15:37.108 "io_path_stat": false, 00:15:37.108 "io_queue_requests": 0, 00:15:37.108 "keep_alive_timeout_ms": 10000, 00:15:37.108 "low_priority_weight": 0, 00:15:37.108 "medium_priority_weight": 0, 00:15:37.108 "nvme_adminq_poll_period_us": 10000, 00:15:37.108 "nvme_error_stat": false, 00:15:37.108 "nvme_ioq_poll_period_us": 0, 00:15:37.108 "rdma_cm_event_timeout_ms": 0, 00:15:37.108 "rdma_max_cq_size": 0, 00:15:37.108 "rdma_srq_size": 0, 00:15:37.108 "reconnect_delay_sec": 0, 00:15:37.108 "timeout_admin_us": 0, 00:15:37.108 "timeout_us": 0, 00:15:37.108 "transport_ack_timeout": 0, 00:15:37.108 "transport_retry_count": 4, 00:15:37.108 "transport_tos": 0 00:15:37.108 } 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "method": "bdev_nvme_set_hotplug", 00:15:37.108 "params": { 
00:15:37.108 "enable": false, 00:15:37.108 "period_us": 100000 00:15:37.108 } 00:15:37.108 }, 00:15:37.108 { 00:15:37.108 "method": "bdev_malloc_create", 00:15:37.108 "params": { 00:15:37.108 "block_size": 4096, 00:15:37.108 "dif_is_head_of_md": false, 00:15:37.108 "dif_pi_format": 0, 00:15:37.108 "dif_type": 0, 00:15:37.108 "md_size": 0, 00:15:37.108 "name": "malloc0", 00:15:37.108 "num_blocks": 8192, 00:15:37.108 "optimal_io_boundary": 0, 00:15:37.109 "physical_block_size": 4096, 00:15:37.109 "uuid": "80921e4c-59a8-468a-9f35-525c7787e027" 00:15:37.109 } 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "method": "bdev_wait_for_examine" 00:15:37.109 } 00:15:37.109 ] 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "subsystem": "nbd", 00:15:37.109 "config": [] 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "subsystem": "scheduler", 00:15:37.109 "config": [ 00:15:37.109 { 00:15:37.109 "method": "framework_set_scheduler", 00:15:37.109 "params": { 00:15:37.109 "name": "static" 00:15:37.109 } 00:15:37.109 } 00:15:37.109 ] 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "subsystem": "nvmf", 00:15:37.109 "config": [ 00:15:37.109 { 00:15:37.109 "method": "nvmf_set_config", 00:15:37.109 "params": { 00:15:37.109 "admin_cmd_passthru": { 00:15:37.109 "identify_ctrlr": false 00:15:37.109 }, 00:15:37.109 "discovery_filter": "match_any" 00:15:37.109 } 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "method": "nvmf_set_max_subsystems", 00:15:37.109 "params": { 00:15:37.109 "max_subsystems": 1024 00:15:37.109 } 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "method": "nvmf_set_crdt", 00:15:37.109 "params": { 00:15:37.109 "crdt1": 0, 00:15:37.109 "crdt2": 0, 00:15:37.109 "crdt3": 0 00:15:37.109 } 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "method": "nvmf_create_transport", 00:15:37.109 "params": { 00:15:37.109 "abort_timeout_sec": 1, 00:15:37.109 "ack_timeout": 0, 00:15:37.109 "buf_cache_size": 4294967295, 00:15:37.109 "c2h_success": false, 00:15:37.109 "data_wr_pool_size": 0, 00:15:37.109 "dif_insert_or_strip": false, 00:15:37.109 "in_capsule_data_size": 4096, 00:15:37.109 "io_unit_size": 131072, 00:15:37.109 "max_aq_depth": 128, 00:15:37.109 "max_io_qpairs_per_ctrlr": 127, 00:15:37.109 "max_io_size": 131072, 00:15:37.109 "max_queue_depth": 128, 00:15:37.109 "num_shared_buffers": 511, 00:15:37.109 "sock_priority": 0, 00:15:37.109 "trtype": "TCP", 00:15:37.109 "zcopy": false 00:15:37.109 } 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "method": "nvmf_create_subsystem", 00:15:37.109 "params": { 00:15:37.109 "allow_any_host": false, 00:15:37.109 "ana_reporting": false, 00:15:37.109 "max_cntlid": 65519, 00:15:37.109 "max_namespaces": 10, 00:15:37.109 "min_cntlid": 1, 00:15:37.109 "model_number": "SPDK bdev Controller", 00:15:37.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.109 "serial_number": "SPDK00000000000001" 00:15:37.109 } 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "method": "nvmf_subsystem_add_host", 00:15:37.109 "params": { 00:15:37.109 "host": "nqn.2016-06.io.spdk:host1", 00:15:37.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.109 "psk": "/tmp/tmp.gqH2L2X4E7" 00:15:37.109 } 00:15:37.109 }, 00:15:37.109 { 00:15:37.109 "method": "nvmf_subsystem_add_ns", 00:15:37.109 "params": { 00:15:37.109 "namespace": { 00:15:37.109 "bdev_name": "malloc0", 00:15:37.109 "nguid": "80921E4C59A8468A9F35525C7787E027", 00:15:37.109 "no_auto_visible": false, 00:15:37.109 "nsid": 1, 00:15:37.109 "uuid": "80921e4c-59a8-468a-9f35-525c7787e027" 00:15:37.109 }, 00:15:37.109 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:37.109 } 00:15:37.109 }, 
00:15:37.109 { 00:15:37.109 "method": "nvmf_subsystem_add_listener", 00:15:37.109 "params": { 00:15:37.109 "listen_address": { 00:15:37.109 "adrfam": "IPv4", 00:15:37.109 "traddr": "10.0.0.2", 00:15:37.109 "trsvcid": "4420", 00:15:37.109 "trtype": "TCP" 00:15:37.109 }, 00:15:37.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.109 "secure_channel": true 00:15:37.109 } 00:15:37.109 } 00:15:37.109 ] 00:15:37.109 } 00:15:37.109 ] 00:15:37.109 }' 00:15:37.109 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:37.372 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:37.372 "subsystems": [ 00:15:37.372 { 00:15:37.372 "subsystem": "keyring", 00:15:37.372 "config": [] 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "subsystem": "iobuf", 00:15:37.372 "config": [ 00:15:37.372 { 00:15:37.372 "method": "iobuf_set_options", 00:15:37.372 "params": { 00:15:37.372 "large_bufsize": 135168, 00:15:37.372 "large_pool_count": 1024, 00:15:37.372 "small_bufsize": 8192, 00:15:37.372 "small_pool_count": 8192 00:15:37.372 } 00:15:37.372 } 00:15:37.372 ] 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "subsystem": "sock", 00:15:37.372 "config": [ 00:15:37.372 { 00:15:37.372 "method": "sock_set_default_impl", 00:15:37.372 "params": { 00:15:37.372 "impl_name": "posix" 00:15:37.372 } 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "method": "sock_impl_set_options", 00:15:37.372 "params": { 00:15:37.372 "enable_ktls": false, 00:15:37.372 "enable_placement_id": 0, 00:15:37.372 "enable_quickack": false, 00:15:37.372 "enable_recv_pipe": true, 00:15:37.372 "enable_zerocopy_send_client": false, 00:15:37.372 "enable_zerocopy_send_server": true, 00:15:37.372 "impl_name": "ssl", 00:15:37.372 "recv_buf_size": 4096, 00:15:37.372 "send_buf_size": 4096, 00:15:37.372 "tls_version": 0, 00:15:37.372 "zerocopy_threshold": 0 00:15:37.372 } 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "method": "sock_impl_set_options", 00:15:37.372 "params": { 00:15:37.372 "enable_ktls": false, 00:15:37.372 "enable_placement_id": 0, 00:15:37.372 "enable_quickack": false, 00:15:37.372 "enable_recv_pipe": true, 00:15:37.372 "enable_zerocopy_send_client": false, 00:15:37.372 "enable_zerocopy_send_server": true, 00:15:37.372 "impl_name": "posix", 00:15:37.372 "recv_buf_size": 2097152, 00:15:37.372 "send_buf_size": 2097152, 00:15:37.372 "tls_version": 0, 00:15:37.372 "zerocopy_threshold": 0 00:15:37.372 } 00:15:37.372 } 00:15:37.372 ] 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "subsystem": "vmd", 00:15:37.372 "config": [] 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "subsystem": "accel", 00:15:37.372 "config": [ 00:15:37.372 { 00:15:37.372 "method": "accel_set_options", 00:15:37.372 "params": { 00:15:37.372 "buf_count": 2048, 00:15:37.372 "large_cache_size": 16, 00:15:37.372 "sequence_count": 2048, 00:15:37.372 "small_cache_size": 128, 00:15:37.372 "task_count": 2048 00:15:37.372 } 00:15:37.372 } 00:15:37.372 ] 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "subsystem": "bdev", 00:15:37.372 "config": [ 00:15:37.372 { 00:15:37.372 "method": "bdev_set_options", 00:15:37.372 "params": { 00:15:37.372 "bdev_auto_examine": true, 00:15:37.372 "bdev_io_cache_size": 256, 00:15:37.372 "bdev_io_pool_size": 65535, 00:15:37.372 "iobuf_large_cache_size": 16, 00:15:37.372 "iobuf_small_cache_size": 128 00:15:37.372 } 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "method": "bdev_raid_set_options", 00:15:37.372 "params": { 00:15:37.372 
"process_max_bandwidth_mb_sec": 0, 00:15:37.372 "process_window_size_kb": 1024 00:15:37.372 } 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "method": "bdev_iscsi_set_options", 00:15:37.372 "params": { 00:15:37.372 "timeout_sec": 30 00:15:37.372 } 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "method": "bdev_nvme_set_options", 00:15:37.372 "params": { 00:15:37.372 "action_on_timeout": "none", 00:15:37.372 "allow_accel_sequence": false, 00:15:37.372 "arbitration_burst": 0, 00:15:37.372 "bdev_retry_count": 3, 00:15:37.372 "ctrlr_loss_timeout_sec": 0, 00:15:37.372 "delay_cmd_submit": true, 00:15:37.372 "dhchap_dhgroups": [ 00:15:37.372 "null", 00:15:37.372 "ffdhe2048", 00:15:37.372 "ffdhe3072", 00:15:37.372 "ffdhe4096", 00:15:37.372 "ffdhe6144", 00:15:37.372 "ffdhe8192" 00:15:37.372 ], 00:15:37.372 "dhchap_digests": [ 00:15:37.372 "sha256", 00:15:37.372 "sha384", 00:15:37.372 "sha512" 00:15:37.372 ], 00:15:37.372 "disable_auto_failback": false, 00:15:37.372 "fast_io_fail_timeout_sec": 0, 00:15:37.372 "generate_uuids": false, 00:15:37.372 "high_priority_weight": 0, 00:15:37.372 "io_path_stat": false, 00:15:37.372 "io_queue_requests": 512, 00:15:37.372 "keep_alive_timeout_ms": 10000, 00:15:37.372 "low_priority_weight": 0, 00:15:37.372 "medium_priority_weight": 0, 00:15:37.372 "nvme_adminq_poll_period_us": 10000, 00:15:37.372 "nvme_error_stat": false, 00:15:37.372 "nvme_ioq_poll_period_us": 0, 00:15:37.372 "rdma_cm_event_timeout_ms": 0, 00:15:37.372 "rdma_max_cq_size": 0, 00:15:37.372 "rdma_srq_size": 0, 00:15:37.372 "reconnect_delay_sec": 0, 00:15:37.372 "timeout_admin_us": 0, 00:15:37.372 "timeout_us": 0, 00:15:37.372 "transport_ack_timeout": 0, 00:15:37.372 "transport_retry_count": 4, 00:15:37.372 "transport_tos": 0 00:15:37.372 } 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "method": "bdev_nvme_attach_controller", 00:15:37.372 "params": { 00:15:37.372 "adrfam": "IPv4", 00:15:37.372 "ctrlr_loss_timeout_sec": 0, 00:15:37.372 "ddgst": false, 00:15:37.372 "fast_io_fail_timeout_sec": 0, 00:15:37.372 "hdgst": false, 00:15:37.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.372 "name": "TLSTEST", 00:15:37.372 "prchk_guard": false, 00:15:37.372 "prchk_reftag": false, 00:15:37.372 "psk": "/tmp/tmp.gqH2L2X4E7", 00:15:37.372 "reconnect_delay_sec": 0, 00:15:37.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.372 "traddr": "10.0.0.2", 00:15:37.372 "trsvcid": "4420", 00:15:37.372 "trtype": "TCP" 00:15:37.372 } 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "method": "bdev_nvme_set_hotplug", 00:15:37.372 "params": { 00:15:37.372 "enable": false, 00:15:37.372 "period_us": 100000 00:15:37.372 } 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "method": "bdev_wait_for_examine" 00:15:37.372 } 00:15:37.372 ] 00:15:37.372 }, 00:15:37.372 { 00:15:37.372 "subsystem": "nbd", 00:15:37.372 "config": [] 00:15:37.372 } 00:15:37.372 ] 00:15:37.372 }' 00:15:37.372 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 83963 00:15:37.372 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83963 ']' 00:15:37.372 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83963 00:15:37.372 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:37.372 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.373 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83963 00:15:37.373 
12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:37.373 killing process with pid 83963 00:15:37.373 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:37.373 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83963' 00:15:37.373 Received shutdown signal, test time was about 10.000000 seconds 00:15:37.373 00:15:37.373 Latency(us) 00:15:37.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.373 =================================================================================================================== 00:15:37.373 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:37.373 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83963 00:15:37.373 [2024-07-25 12:22:52.176011] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:37.373 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83963 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 83860 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83860 ']' 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83860 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83860 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:37.658 killing process with pid 83860 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83860' 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83860 00:15:37.658 [2024-07-25 12:22:52.455768] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:37.658 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83860 00:15:37.917 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:37.917 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.917 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:37.917 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:37.917 "subsystems": [ 00:15:37.917 { 00:15:37.917 "subsystem": "keyring", 00:15:37.917 "config": [] 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "subsystem": "iobuf", 00:15:37.917 "config": [ 00:15:37.917 { 00:15:37.917 "method": "iobuf_set_options", 00:15:37.917 "params": { 00:15:37.917 "large_bufsize": 135168, 00:15:37.917 "large_pool_count": 1024, 00:15:37.917 "small_bufsize": 8192, 00:15:37.917 "small_pool_count": 8192 
00:15:37.917 } 00:15:37.917 } 00:15:37.917 ] 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "subsystem": "sock", 00:15:37.917 "config": [ 00:15:37.917 { 00:15:37.917 "method": "sock_set_default_impl", 00:15:37.917 "params": { 00:15:37.917 "impl_name": "posix" 00:15:37.917 } 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "method": "sock_impl_set_options", 00:15:37.917 "params": { 00:15:37.917 "enable_ktls": false, 00:15:37.917 "enable_placement_id": 0, 00:15:37.917 "enable_quickack": false, 00:15:37.917 "enable_recv_pipe": true, 00:15:37.917 "enable_zerocopy_send_client": false, 00:15:37.917 "enable_zerocopy_send_server": true, 00:15:37.917 "impl_name": "ssl", 00:15:37.917 "recv_buf_size": 4096, 00:15:37.917 "send_buf_size": 4096, 00:15:37.917 "tls_version": 0, 00:15:37.917 "zerocopy_threshold": 0 00:15:37.917 } 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "method": "sock_impl_set_options", 00:15:37.917 "params": { 00:15:37.917 "enable_ktls": false, 00:15:37.917 "enable_placement_id": 0, 00:15:37.917 "enable_quickack": false, 00:15:37.917 "enable_recv_pipe": true, 00:15:37.917 "enable_zerocopy_send_client": false, 00:15:37.917 "enable_zerocopy_send_server": true, 00:15:37.917 "impl_name": "posix", 00:15:37.917 "recv_buf_size": 2097152, 00:15:37.917 "send_buf_size": 2097152, 00:15:37.917 "tls_version": 0, 00:15:37.917 "zerocopy_threshold": 0 00:15:37.917 } 00:15:37.917 } 00:15:37.917 ] 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "subsystem": "vmd", 00:15:37.917 "config": [] 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "subsystem": "accel", 00:15:37.917 "config": [ 00:15:37.917 { 00:15:37.917 "method": "accel_set_options", 00:15:37.917 "params": { 00:15:37.917 "buf_count": 2048, 00:15:37.917 "large_cache_size": 16, 00:15:37.917 "sequence_count": 2048, 00:15:37.917 "small_cache_size": 128, 00:15:37.917 "task_count": 2048 00:15:37.917 } 00:15:37.917 } 00:15:37.917 ] 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "subsystem": "bdev", 00:15:37.917 "config": [ 00:15:37.917 { 00:15:37.917 "method": "bdev_set_options", 00:15:37.917 "params": { 00:15:37.917 "bdev_auto_examine": true, 00:15:37.917 "bdev_io_cache_size": 256, 00:15:37.917 "bdev_io_pool_size": 65535, 00:15:37.917 "iobuf_large_cache_size": 16, 00:15:37.917 "iobuf_small_cache_size": 128 00:15:37.917 } 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "method": "bdev_raid_set_options", 00:15:37.917 "params": { 00:15:37.917 "process_max_bandwidth_mb_sec": 0, 00:15:37.917 "process_window_size_kb": 1024 00:15:37.917 } 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "method": "bdev_iscsi_set_options", 00:15:37.917 "params": { 00:15:37.917 "timeout_sec": 30 00:15:37.917 } 00:15:37.917 }, 00:15:37.917 { 00:15:37.917 "method": "bdev_nvme_set_options", 00:15:37.917 "params": { 00:15:37.917 "action_on_timeout": "none", 00:15:37.917 "allow_accel_sequence": false, 00:15:37.917 "arbitration_burst": 0, 00:15:37.917 "bdev_retry_count": 3, 00:15:37.917 "ctrlr_loss_timeout_sec": 0, 00:15:37.917 "delay_cmd_submit": true, 00:15:37.917 "dhchap_dhgroups": [ 00:15:37.917 "null", 00:15:37.917 "ffdhe2048", 00:15:37.917 "ffdhe3072", 00:15:37.917 "ffdhe4096", 00:15:37.917 "ffdhe6144", 00:15:37.917 "ffdhe8192" 00:15:37.917 ], 00:15:37.917 "dhchap_digests": [ 00:15:37.917 "sha256", 00:15:37.917 "sha384", 00:15:37.918 "sha512" 00:15:37.918 ], 00:15:37.918 "disable_auto_failback": false, 00:15:37.918 "fast_io_fail_timeout_sec": 0, 00:15:37.918 "generate_uuids": false, 00:15:37.918 "high_priority_weight": 0, 00:15:37.918 "io_path_stat": false, 00:15:37.918 "io_queue_requests": 0, 
00:15:37.918 "keep_alive_timeout_ms": 10000, 00:15:37.918 "low_priority_weight": 0, 00:15:37.918 "medium_priority_weight": 0, 00:15:37.918 "nvme_adminq_poll_period_us": 10000, 00:15:37.918 "nvme_error_stat": false, 00:15:37.918 "nvme_ioq_poll_period_us": 0, 00:15:37.918 "rdma_cm_event_timeout_ms": 0, 00:15:37.918 "rdma_max_cq_size": 0, 00:15:37.918 "rdma_srq_size": 0, 00:15:37.918 "reconnect_delay_sec": 0, 00:15:37.918 "timeout_admin_us": 0, 00:15:37.918 "timeout_us": 0, 00:15:37.918 "transport_ack_timeout": 0, 00:15:37.918 "transport_retry_count": 4, 00:15:37.918 "transport_tos": 0 00:15:37.918 } 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "bdev_nvme_set_hotplug", 00:15:37.918 "params": { 00:15:37.918 "enable": false, 00:15:37.918 "period_us": 100000 00:15:37.918 } 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "bdev_malloc_create", 00:15:37.918 "params": { 00:15:37.918 "block_size": 4096, 00:15:37.918 "dif_is_head_of_md": false, 00:15:37.918 "dif_pi_format": 0, 00:15:37.918 "dif_type": 0, 00:15:37.918 "md_size": 0, 00:15:37.918 "name": "malloc0", 00:15:37.918 "num_blocks": 8192, 00:15:37.918 "optimal_io_boundary": 0, 00:15:37.918 "physical_block_size": 4096, 00:15:37.918 "uuid": "80921e4c-59a8-468a-9f35-525c7787e027" 00:15:37.918 } 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "bdev_wait_for_examine" 00:15:37.918 } 00:15:37.918 ] 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "subsystem": "nbd", 00:15:37.918 "config": [] 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "subsystem": "scheduler", 00:15:37.918 "config": [ 00:15:37.918 { 00:15:37.918 "method": "framework_set_scheduler", 00:15:37.918 "params": { 00:15:37.918 "name": "static" 00:15:37.918 } 00:15:37.918 } 00:15:37.918 ] 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "subsystem": "nvmf", 00:15:37.918 "config": [ 00:15:37.918 { 00:15:37.918 "method": "nvmf_set_config", 00:15:37.918 "params": { 00:15:37.918 "admin_cmd_passthru": { 00:15:37.918 "identify_ctrlr": false 00:15:37.918 }, 00:15:37.918 "discovery_filter": "match_any" 00:15:37.918 } 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "nvmf_set_max_subsystems", 00:15:37.918 "params": { 00:15:37.918 "max_subsystems": 1024 00:15:37.918 } 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "nvmf_set_crdt", 00:15:37.918 "params": { 00:15:37.918 "crdt1": 0, 00:15:37.918 "crdt2": 0, 00:15:37.918 "crdt3": 0 00:15:37.918 } 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "nvmf_create_transport", 00:15:37.918 "params": { 00:15:37.918 "abort_timeout_sec": 1, 00:15:37.918 "ack_timeout": 0, 00:15:37.918 "buf_cache_size": 4294967295, 00:15:37.918 "c2h_success": false, 00:15:37.918 "data_wr_pool_size": 0, 00:15:37.918 "dif_insert_or_strip": false, 00:15:37.918 "in_capsule_data_size": 4096, 00:15:37.918 "io_unit_size": 131072, 00:15:37.918 "max_aq_depth": 128, 00:15:37.918 "max_io_qpairs_per_ctrlr": 127, 00:15:37.918 "max_io_size": 131072, 00:15:37.918 "max_queue_depth": 128, 00:15:37.918 "num_shared_buffers": 511, 00:15:37.918 "sock_priority": 0, 00:15:37.918 "trtype": "TCP", 00:15:37.918 "zcopy": false 00:15:37.918 } 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "nvmf_create_subsystem", 00:15:37.918 "params": { 00:15:37.918 "allow_any_host": false, 00:15:37.918 "ana_reporting": false, 00:15:37.918 "max_cntlid": 65519, 00:15:37.918 "max_namespaces": 10, 00:15:37.918 "min_cntlid": 1, 00:15:37.918 "model_number": "SPDK bdev Controller", 00:15:37.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.918 "serial_number": "SPDK00000000000001" 00:15:37.918 } 
00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "nvmf_subsystem_add_host", 00:15:37.918 "params": { 00:15:37.918 "host": "nqn.2016-06.io.spdk:host1", 00:15:37.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.918 "psk": "/tmp/tmp.gqH2L2X4E7" 00:15:37.918 } 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "nvmf_subsystem_add_ns", 00:15:37.918 "params": { 00:15:37.918 "namespace": { 00:15:37.918 "bdev_name": "malloc0", 00:15:37.918 "nguid": "80921E4C59A8468A9F35525C7787E027", 00:15:37.918 "no_auto_visible": false, 00:15:37.918 "nsid": 1, 00:15:37.918 "uuid": "80921e4c-59a8-468a-9f35-525c7787e027" 00:15:37.918 }, 00:15:37.918 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:37.918 } 00:15:37.918 }, 00:15:37.918 { 00:15:37.918 "method": "nvmf_subsystem_add_listener", 00:15:37.918 "params": { 00:15:37.918 "listen_address": { 00:15:37.918 "adrfam": "IPv4", 00:15:37.918 "traddr": "10.0.0.2", 00:15:37.918 "trsvcid": "4420", 00:15:37.918 "trtype": "TCP" 00:15:37.918 }, 00:15:37.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.918 "secure_channel": true 00:15:37.918 } 00:15:37.918 } 00:15:37.918 ] 00:15:37.918 } 00:15:37.918 ] 00:15:37.918 }' 00:15:37.918 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.918 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84040 00:15:37.918 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:37.918 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84040 00:15:37.918 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84040 ']' 00:15:37.919 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.919 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.919 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.919 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.919 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.178 [2024-07-25 12:22:52.785708] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:38.178 [2024-07-25 12:22:52.785807] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.178 [2024-07-25 12:22:52.926425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.178 [2024-07-25 12:22:53.040724] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.178 [2024-07-25 12:22:53.040789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:38.178 [2024-07-25 12:22:53.040801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.178 [2024-07-25 12:22:53.040810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.178 [2024-07-25 12:22:53.040817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.178 [2024-07-25 12:22:53.040918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.436 [2024-07-25 12:22:53.273521] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.436 [2024-07-25 12:22:53.289407] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:38.694 [2024-07-25 12:22:53.305440] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:38.694 [2024-07-25 12:22:53.305695] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84080 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84080 /var/tmp/bdevperf.sock 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84080 ']' 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:38.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
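Unlike the earlier invocations, this phase feeds whole configurations in up front: the target config built by the echo at target/tls.sh@203 and the bdevperf config echoed below at @204 are both save_config dumps replayed into fresh processes as -c /dev/fd/62 and -c /dev/fd/63, presumably via bash process substitution, which is what produces /dev/fd/NN paths. A sketch of that pattern under those assumptions:

    # Capture the live configs over JSON-RPC (as at tls.sh@196 and @197)
    tgtconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
    bdevperfconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

    # Re-create the target from its saved config via an anonymous pipe
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &

    # Same trick for bdevperf, so it attaches the TLS controller at startup
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf") &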
00:15:38.952 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:38.952 "subsystems": [ 00:15:38.952 { 00:15:38.952 "subsystem": "keyring", 00:15:38.952 "config": [] 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "subsystem": "iobuf", 00:15:38.952 "config": [ 00:15:38.952 { 00:15:38.952 "method": "iobuf_set_options", 00:15:38.952 "params": { 00:15:38.952 "large_bufsize": 135168, 00:15:38.952 "large_pool_count": 1024, 00:15:38.952 "small_bufsize": 8192, 00:15:38.952 "small_pool_count": 8192 00:15:38.952 } 00:15:38.952 } 00:15:38.952 ] 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "subsystem": "sock", 00:15:38.952 "config": [ 00:15:38.952 { 00:15:38.952 "method": "sock_set_default_impl", 00:15:38.952 "params": { 00:15:38.952 "impl_name": "posix" 00:15:38.952 } 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "method": "sock_impl_set_options", 00:15:38.952 "params": { 00:15:38.952 "enable_ktls": false, 00:15:38.952 "enable_placement_id": 0, 00:15:38.952 "enable_quickack": false, 00:15:38.952 "enable_recv_pipe": true, 00:15:38.952 "enable_zerocopy_send_client": false, 00:15:38.952 "enable_zerocopy_send_server": true, 00:15:38.952 "impl_name": "ssl", 00:15:38.952 "recv_buf_size": 4096, 00:15:38.952 "send_buf_size": 4096, 00:15:38.952 "tls_version": 0, 00:15:38.952 "zerocopy_threshold": 0 00:15:38.952 } 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "method": "sock_impl_set_options", 00:15:38.952 "params": { 00:15:38.952 "enable_ktls": false, 00:15:38.952 "enable_placement_id": 0, 00:15:38.952 "enable_quickack": false, 00:15:38.952 "enable_recv_pipe": true, 00:15:38.952 "enable_zerocopy_send_client": false, 00:15:38.952 "enable_zerocopy_send_server": true, 00:15:38.952 "impl_name": "posix", 00:15:38.952 "recv_buf_size": 2097152, 00:15:38.952 "send_buf_size": 2097152, 00:15:38.952 "tls_version": 0, 00:15:38.952 "zerocopy_threshold": 0 00:15:38.952 } 00:15:38.952 } 00:15:38.952 ] 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "subsystem": "vmd", 00:15:38.952 "config": [] 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "subsystem": "accel", 00:15:38.952 "config": [ 00:15:38.952 { 00:15:38.952 "method": "accel_set_options", 00:15:38.952 "params": { 00:15:38.952 "buf_count": 2048, 00:15:38.952 "large_cache_size": 16, 00:15:38.952 "sequence_count": 2048, 00:15:38.952 "small_cache_size": 128, 00:15:38.952 "task_count": 2048 00:15:38.952 } 00:15:38.952 } 00:15:38.952 ] 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "subsystem": "bdev", 00:15:38.952 "config": [ 00:15:38.952 { 00:15:38.952 "method": "bdev_set_options", 00:15:38.952 "params": { 00:15:38.952 "bdev_auto_examine": true, 00:15:38.952 "bdev_io_cache_size": 256, 00:15:38.952 "bdev_io_pool_size": 65535, 00:15:38.952 "iobuf_large_cache_size": 16, 00:15:38.952 "iobuf_small_cache_size": 128 00:15:38.952 } 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "method": "bdev_raid_set_options", 00:15:38.952 "params": { 00:15:38.952 "process_max_bandwidth_mb_sec": 0, 00:15:38.952 "process_window_size_kb": 1024 00:15:38.952 } 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "method": "bdev_iscsi_set_options", 00:15:38.952 "params": { 00:15:38.952 "timeout_sec": 30 00:15:38.952 } 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "method": "bdev_nvme_set_options", 00:15:38.952 "params": { 00:15:38.952 "action_on_timeout": "none", 00:15:38.952 "allow_accel_sequence": false, 00:15:38.952 "arbitration_burst": 0, 00:15:38.952 "bdev_retry_count": 3, 00:15:38.952 "ctrlr_loss_timeout_sec": 0, 00:15:38.952 "delay_cmd_submit": true, 00:15:38.952 "dhchap_dhgroups": [ 
00:15:38.952 "null", 00:15:38.952 "ffdhe2048", 00:15:38.952 "ffdhe3072", 00:15:38.952 "ffdhe4096", 00:15:38.952 "ffdhe6144", 00:15:38.952 "ffdhe8192" 00:15:38.952 ], 00:15:38.952 "dhchap_digests": [ 00:15:38.952 "sha256", 00:15:38.952 "sha384", 00:15:38.952 "sha512" 00:15:38.952 ], 00:15:38.952 "disable_auto_failback": false, 00:15:38.952 "fast_io_fail_timeout_sec": 0, 00:15:38.952 "generate_uuids": false, 00:15:38.952 "high_priority_weight": 0, 00:15:38.952 "io_path_stat": false, 00:15:38.952 "io_queue_requests": 512, 00:15:38.952 "keep_alive_timeout_ms": 10000, 00:15:38.952 "low_priority_weight": 0, 00:15:38.952 "medium_priority_weight": 0, 00:15:38.952 "nvme_adminq_poll_period_us": 10000, 00:15:38.952 "nvme_error_stat": false, 00:15:38.952 "nvme_ioq_poll_period_us": 0, 00:15:38.952 "rdma_cm_event_timeout_ms": 0, 00:15:38.952 "rdma_max_cq_size": 0, 00:15:38.952 "rdma_srq_size": 0, 00:15:38.952 "reconnect_delay_sec": 0, 00:15:38.952 "timeout_admin_us": 0, 00:15:38.952 "timeout_us": 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:38.952 0, 00:15:38.952 "transport_ack_timeout": 0, 00:15:38.952 "transport_retry_count": 4, 00:15:38.952 "transport_tos": 0 00:15:38.952 } 00:15:38.952 }, 00:15:38.952 { 00:15:38.952 "method": "bdev_nvme_attach_controller", 00:15:38.952 "params": { 00:15:38.952 "adrfam": "IPv4", 00:15:38.952 "ctrlr_loss_timeout_sec": 0, 00:15:38.952 "ddgst": false, 00:15:38.953 "fast_io_fail_timeout_sec": 0, 00:15:38.953 "hdgst": false, 00:15:38.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.953 "name": "TLSTEST", 00:15:38.953 "prchk_guard": false, 00:15:38.953 "prchk_reftag": false, 00:15:38.953 "psk": "/tmp/tmp.gqH2L2X4E7", 00:15:38.953 "reconnect_delay_sec": 0, 00:15:38.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.953 "traddr": "10.0.0.2", 00:15:38.953 "trsvcid": "4420", 00:15:38.953 "trtype": "TCP" 00:15:38.953 } 00:15:38.953 }, 00:15:38.953 { 00:15:38.953 "method": "bdev_nvme_set_hotplug", 00:15:38.953 "params": { 00:15:38.953 "enable": false, 00:15:38.953 "period_us": 100000 00:15:38.953 } 00:15:38.953 }, 00:15:38.953 { 00:15:38.953 "method": "bdev_wait_for_examine" 00:15:38.953 } 00:15:38.953 ] 00:15:38.953 }, 00:15:38.953 { 00:15:38.953 "subsystem": "nbd", 00:15:38.953 "config": [] 00:15:38.953 } 00:15:38.953 ] 00:15:38.953 }' 00:15:38.953 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.953 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.211 [2024-07-25 12:22:53.847897] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:15:39.211 [2024-07-25 12:22:53.847975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84080 ] 00:15:39.211 [2024-07-25 12:22:53.981990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.469 [2024-07-25 12:22:54.102927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.469 [2024-07-25 12:22:54.275746] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:39.469 [2024-07-25 12:22:54.275874] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:40.035 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.035 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:40.035 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:40.293 Running I/O for 10 seconds... 00:15:50.264 00:15:50.264 Latency(us) 00:15:50.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.264 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:50.264 Verification LBA range: start 0x0 length 0x2000 00:15:50.264 TLSTESTn1 : 10.02 3718.69 14.53 0.00 0.00 34346.55 2308.65 24903.68 00:15:50.264 =================================================================================================================== 00:15:50.264 Total : 3718.69 14.53 0.00 0.00 34346.55 2308.65 24903.68 00:15:50.264 0 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 84080 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84080 ']' 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84080 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84080 00:15:50.264 killing process with pid 84080 00:15:50.264 Received shutdown signal, test time was about 10.000000 seconds 00:15:50.264 00:15:50.264 Latency(us) 00:15:50.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.264 =================================================================================================================== 00:15:50.264 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84080' 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 84080 00:15:50.264 [2024-07-25 12:23:05.038654] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:50.264 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84080 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 84040 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84040 ']' 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84040 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84040 00:15:50.524 killing process with pid 84040 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84040' 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84040 00:15:50.524 [2024-07-25 12:23:05.298888] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:50.524 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84040 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:50.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84231 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84231 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84231 ']' 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.782 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:50.782 [2024-07-25 12:23:05.620146] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:50.782 [2024-07-25 12:23:05.620437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.040 [2024-07-25 12:23:05.761270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.040 [2024-07-25 12:23:05.883123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.040 [2024-07-25 12:23:05.883519] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.040 [2024-07-25 12:23:05.883685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.040 [2024-07-25 12:23:05.883832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.040 [2024-07-25 12:23:05.884030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.040 [2024-07-25 12:23:05.884110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.972 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.972 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:51.972 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.972 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.972 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.972 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.972 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.gqH2L2X4E7 00:15:51.972 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gqH2L2X4E7 00:15:51.972 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:52.230 [2024-07-25 12:23:06.938751] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.230 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:52.488 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:52.746 [2024-07-25 12:23:07.459030] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:52.746 [2024-07-25 12:23:07.459263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.746 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:15:53.004 malloc0 00:15:53.004 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:53.262 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gqH2L2X4E7 00:15:53.521 [2024-07-25 12:23:08.235936] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84333 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84333 /var/tmp/bdevperf.sock 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84333 ']' 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.521 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.521 [2024-07-25 12:23:08.300745] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:15:53.521 [2024-07-25 12:23:08.300846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84333 ] 00:15:53.802 [2024-07-25 12:23:08.439018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.802 [2024-07-25 12:23:08.559799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.737 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.737 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:54.737 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gqH2L2X4E7 00:15:54.737 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:54.996 [2024-07-25 12:23:09.746491] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:54.996 nvme0n1 00:15:54.996 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:55.254 Running I/O for 1 seconds... 00:15:56.189 00:15:56.189 Latency(us) 00:15:56.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.189 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:56.189 Verification LBA range: start 0x0 length 0x2000 00:15:56.189 nvme0n1 : 1.02 3796.37 14.83 0.00 0.00 33227.04 4855.62 21805.61 00:15:56.189 =================================================================================================================== 00:15:56.189 Total : 3796.37 14.83 0.00 0.00 33227.04 4855.62 21805.61 00:15:56.189 0 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 84333 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84333 ']' 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84333 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84333 00:15:56.189 killing process with pid 84333 00:15:56.189 Received shutdown signal, test time was about 1.000000 seconds 00:15:56.189 00:15:56.189 Latency(us) 00:15:56.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.189 =================================================================================================================== 00:15:56.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:56.189 12:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84333' 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84333 00:15:56.189 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84333 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 84231 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84231 ']' 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84231 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84231 00:15:56.448 killing process with pid 84231 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84231' 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84231 00:15:56.448 [2024-07-25 12:23:11.293870] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:56.448 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84231 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84414 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84414 00:15:56.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84414 ']' 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
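The run that just finished (pids 84231/84333) exercises the keyring-based TLS flow end to end: the target maps host nqn.2016-06.io.spdk:host1 to the PSK interchange file (via the PSK-path option the deprecation warnings above call out), while the bdevperf initiator registers the same file as a named key and attaches by key name instead of passing a raw spdk_nvme_ctrlr_opts.psk. A minimal sketch of that sequence, distilled from the RPCs traced above (every command and argument is the test's own; /tmp/tmp.gqH2L2X4E7 is its temporary PSK file):
  # Target side: TLS-enabled listener (-k) plus a host-to-PSK mapping
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gqH2L2X4E7
  # Initiator side (bdevperf RPC socket): register the key, then attach by key name
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gqH2L2X4E7
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1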
00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.706 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.965 [2024-07-25 12:23:11.626194] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:56.965 [2024-07-25 12:23:11.626312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.965 [2024-07-25 12:23:11.767933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.223 [2024-07-25 12:23:11.884920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.223 [2024-07-25 12:23:11.884972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.223 [2024-07-25 12:23:11.884984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.223 [2024-07-25 12:23:11.884993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.223 [2024-07-25 12:23:11.885000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.223 [2024-07-25 12:23:11.885027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.158 [2024-07-25 12:23:12.712710] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.158 malloc0 00:15:58.158 [2024-07-25 12:23:12.744861] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:58.158 [2024-07-25 12:23:12.745073] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=84464 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 84464 /var/tmp/bdevperf.sock 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84464 ']' 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.158 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.158 [2024-07-25 12:23:12.837400] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:15:58.158 [2024-07-25 12:23:12.837509] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84464 ] 00:15:58.158 [2024-07-25 12:23:12.978468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.416 [2024-07-25 12:23:13.098755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.347 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.347 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:59.347 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gqH2L2X4E7 00:15:59.347 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:59.605 [2024-07-25 12:23:14.390559] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:59.605 nvme0n1 00:15:59.862 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:59.862 Running I/O for 1 seconds... 
00:16:00.796 00:16:00.796 Latency(us) 00:16:00.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.796 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.796 Verification LBA range: start 0x0 length 0x2000 00:16:00.796 nvme0n1 : 1.02 3952.61 15.44 0.00 0.00 32063.01 5689.72 26571.87 00:16:00.796 =================================================================================================================== 00:16:00.796 Total : 3952.61 15.44 0.00 0.00 32063.01 5689.72 26571.87 00:16:00.796 0 00:16:00.796 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:16:00.796 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.796 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.054 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.054 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:16:01.054 "subsystems": [ 00:16:01.054 { 00:16:01.054 "subsystem": "keyring", 00:16:01.054 "config": [ 00:16:01.054 { 00:16:01.054 "method": "keyring_file_add_key", 00:16:01.054 "params": { 00:16:01.054 "name": "key0", 00:16:01.054 "path": "/tmp/tmp.gqH2L2X4E7" 00:16:01.054 } 00:16:01.054 } 00:16:01.054 ] 00:16:01.054 }, 00:16:01.054 { 00:16:01.054 "subsystem": "iobuf", 00:16:01.054 "config": [ 00:16:01.054 { 00:16:01.054 "method": "iobuf_set_options", 00:16:01.054 "params": { 00:16:01.054 "large_bufsize": 135168, 00:16:01.054 "large_pool_count": 1024, 00:16:01.054 "small_bufsize": 8192, 00:16:01.054 "small_pool_count": 8192 00:16:01.054 } 00:16:01.054 } 00:16:01.054 ] 00:16:01.054 }, 00:16:01.054 { 00:16:01.054 "subsystem": "sock", 00:16:01.054 "config": [ 00:16:01.054 { 00:16:01.054 "method": "sock_set_default_impl", 00:16:01.054 "params": { 00:16:01.054 "impl_name": "posix" 00:16:01.054 } 00:16:01.054 }, 00:16:01.054 { 00:16:01.054 "method": "sock_impl_set_options", 00:16:01.054 "params": { 00:16:01.054 "enable_ktls": false, 00:16:01.054 "enable_placement_id": 0, 00:16:01.054 "enable_quickack": false, 00:16:01.054 "enable_recv_pipe": true, 00:16:01.054 "enable_zerocopy_send_client": false, 00:16:01.054 "enable_zerocopy_send_server": true, 00:16:01.054 "impl_name": "ssl", 00:16:01.054 "recv_buf_size": 4096, 00:16:01.054 "send_buf_size": 4096, 00:16:01.054 "tls_version": 0, 00:16:01.054 "zerocopy_threshold": 0 00:16:01.054 } 00:16:01.054 }, 00:16:01.054 { 00:16:01.054 "method": "sock_impl_set_options", 00:16:01.054 "params": { 00:16:01.054 "enable_ktls": false, 00:16:01.054 "enable_placement_id": 0, 00:16:01.054 "enable_quickack": false, 00:16:01.054 "enable_recv_pipe": true, 00:16:01.054 "enable_zerocopy_send_client": false, 00:16:01.054 "enable_zerocopy_send_server": true, 00:16:01.054 "impl_name": "posix", 00:16:01.054 "recv_buf_size": 2097152, 00:16:01.054 "send_buf_size": 2097152, 00:16:01.054 "tls_version": 0, 00:16:01.054 "zerocopy_threshold": 0 00:16:01.054 } 00:16:01.054 } 00:16:01.054 ] 00:16:01.054 }, 00:16:01.054 { 00:16:01.054 "subsystem": "vmd", 00:16:01.054 "config": [] 00:16:01.054 }, 00:16:01.054 { 00:16:01.054 "subsystem": "accel", 00:16:01.054 "config": [ 00:16:01.054 { 00:16:01.054 "method": "accel_set_options", 00:16:01.054 "params": { 00:16:01.054 "buf_count": 2048, 00:16:01.054 "large_cache_size": 16, 00:16:01.054 "sequence_count": 2048, 00:16:01.054 "small_cache_size": 128, 00:16:01.054 "task_count": 
2048 00:16:01.054 } 00:16:01.054 } 00:16:01.054 ] 00:16:01.054 }, 00:16:01.054 { 00:16:01.054 "subsystem": "bdev", 00:16:01.054 "config": [ 00:16:01.054 { 00:16:01.054 "method": "bdev_set_options", 00:16:01.054 "params": { 00:16:01.054 "bdev_auto_examine": true, 00:16:01.054 "bdev_io_cache_size": 256, 00:16:01.054 "bdev_io_pool_size": 65535, 00:16:01.054 "iobuf_large_cache_size": 16, 00:16:01.054 "iobuf_small_cache_size": 128 00:16:01.054 } 00:16:01.054 }, 00:16:01.054 { 00:16:01.054 "method": "bdev_raid_set_options", 00:16:01.054 "params": { 00:16:01.054 "process_max_bandwidth_mb_sec": 0, 00:16:01.054 "process_window_size_kb": 1024 00:16:01.054 } 00:16:01.054 }, 00:16:01.054 { 00:16:01.055 "method": "bdev_iscsi_set_options", 00:16:01.055 "params": { 00:16:01.055 "timeout_sec": 30 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "bdev_nvme_set_options", 00:16:01.055 "params": { 00:16:01.055 "action_on_timeout": "none", 00:16:01.055 "allow_accel_sequence": false, 00:16:01.055 "arbitration_burst": 0, 00:16:01.055 "bdev_retry_count": 3, 00:16:01.055 "ctrlr_loss_timeout_sec": 0, 00:16:01.055 "delay_cmd_submit": true, 00:16:01.055 "dhchap_dhgroups": [ 00:16:01.055 "null", 00:16:01.055 "ffdhe2048", 00:16:01.055 "ffdhe3072", 00:16:01.055 "ffdhe4096", 00:16:01.055 "ffdhe6144", 00:16:01.055 "ffdhe8192" 00:16:01.055 ], 00:16:01.055 "dhchap_digests": [ 00:16:01.055 "sha256", 00:16:01.055 "sha384", 00:16:01.055 "sha512" 00:16:01.055 ], 00:16:01.055 "disable_auto_failback": false, 00:16:01.055 "fast_io_fail_timeout_sec": 0, 00:16:01.055 "generate_uuids": false, 00:16:01.055 "high_priority_weight": 0, 00:16:01.055 "io_path_stat": false, 00:16:01.055 "io_queue_requests": 0, 00:16:01.055 "keep_alive_timeout_ms": 10000, 00:16:01.055 "low_priority_weight": 0, 00:16:01.055 "medium_priority_weight": 0, 00:16:01.055 "nvme_adminq_poll_period_us": 10000, 00:16:01.055 "nvme_error_stat": false, 00:16:01.055 "nvme_ioq_poll_period_us": 0, 00:16:01.055 "rdma_cm_event_timeout_ms": 0, 00:16:01.055 "rdma_max_cq_size": 0, 00:16:01.055 "rdma_srq_size": 0, 00:16:01.055 "reconnect_delay_sec": 0, 00:16:01.055 "timeout_admin_us": 0, 00:16:01.055 "timeout_us": 0, 00:16:01.055 "transport_ack_timeout": 0, 00:16:01.055 "transport_retry_count": 4, 00:16:01.055 "transport_tos": 0 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "bdev_nvme_set_hotplug", 00:16:01.055 "params": { 00:16:01.055 "enable": false, 00:16:01.055 "period_us": 100000 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "bdev_malloc_create", 00:16:01.055 "params": { 00:16:01.055 "block_size": 4096, 00:16:01.055 "dif_is_head_of_md": false, 00:16:01.055 "dif_pi_format": 0, 00:16:01.055 "dif_type": 0, 00:16:01.055 "md_size": 0, 00:16:01.055 "name": "malloc0", 00:16:01.055 "num_blocks": 8192, 00:16:01.055 "optimal_io_boundary": 0, 00:16:01.055 "physical_block_size": 4096, 00:16:01.055 "uuid": "54e51564-21a9-4119-9275-971bc2bcd002" 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "bdev_wait_for_examine" 00:16:01.055 } 00:16:01.055 ] 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "subsystem": "nbd", 00:16:01.055 "config": [] 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "subsystem": "scheduler", 00:16:01.055 "config": [ 00:16:01.055 { 00:16:01.055 "method": "framework_set_scheduler", 00:16:01.055 "params": { 00:16:01.055 "name": "static" 00:16:01.055 } 00:16:01.055 } 00:16:01.055 ] 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "subsystem": "nvmf", 00:16:01.055 "config": [ 00:16:01.055 { 00:16:01.055 
"method": "nvmf_set_config", 00:16:01.055 "params": { 00:16:01.055 "admin_cmd_passthru": { 00:16:01.055 "identify_ctrlr": false 00:16:01.055 }, 00:16:01.055 "discovery_filter": "match_any" 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "nvmf_set_max_subsystems", 00:16:01.055 "params": { 00:16:01.055 "max_subsystems": 1024 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "nvmf_set_crdt", 00:16:01.055 "params": { 00:16:01.055 "crdt1": 0, 00:16:01.055 "crdt2": 0, 00:16:01.055 "crdt3": 0 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "nvmf_create_transport", 00:16:01.055 "params": { 00:16:01.055 "abort_timeout_sec": 1, 00:16:01.055 "ack_timeout": 0, 00:16:01.055 "buf_cache_size": 4294967295, 00:16:01.055 "c2h_success": false, 00:16:01.055 "data_wr_pool_size": 0, 00:16:01.055 "dif_insert_or_strip": false, 00:16:01.055 "in_capsule_data_size": 4096, 00:16:01.055 "io_unit_size": 131072, 00:16:01.055 "max_aq_depth": 128, 00:16:01.055 "max_io_qpairs_per_ctrlr": 127, 00:16:01.055 "max_io_size": 131072, 00:16:01.055 "max_queue_depth": 128, 00:16:01.055 "num_shared_buffers": 511, 00:16:01.055 "sock_priority": 0, 00:16:01.055 "trtype": "TCP", 00:16:01.055 "zcopy": false 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "nvmf_create_subsystem", 00:16:01.055 "params": { 00:16:01.055 "allow_any_host": false, 00:16:01.055 "ana_reporting": false, 00:16:01.055 "max_cntlid": 65519, 00:16:01.055 "max_namespaces": 32, 00:16:01.055 "min_cntlid": 1, 00:16:01.055 "model_number": "SPDK bdev Controller", 00:16:01.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.055 "serial_number": "00000000000000000000" 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "nvmf_subsystem_add_host", 00:16:01.055 "params": { 00:16:01.055 "host": "nqn.2016-06.io.spdk:host1", 00:16:01.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.055 "psk": "key0" 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "nvmf_subsystem_add_ns", 00:16:01.055 "params": { 00:16:01.055 "namespace": { 00:16:01.055 "bdev_name": "malloc0", 00:16:01.055 "nguid": "54E5156421A941199275971BC2BCD002", 00:16:01.055 "no_auto_visible": false, 00:16:01.055 "nsid": 1, 00:16:01.055 "uuid": "54e51564-21a9-4119-9275-971bc2bcd002" 00:16:01.055 }, 00:16:01.055 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:01.055 } 00:16:01.055 }, 00:16:01.055 { 00:16:01.055 "method": "nvmf_subsystem_add_listener", 00:16:01.055 "params": { 00:16:01.055 "listen_address": { 00:16:01.055 "adrfam": "IPv4", 00:16:01.055 "traddr": "10.0.0.2", 00:16:01.055 "trsvcid": "4420", 00:16:01.055 "trtype": "TCP" 00:16:01.055 }, 00:16:01.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.055 "secure_channel": false, 00:16:01.055 "sock_impl": "ssl" 00:16:01.055 } 00:16:01.055 } 00:16:01.055 ] 00:16:01.055 } 00:16:01.055 ] 00:16:01.055 }' 00:16:01.055 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:01.314 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:16:01.314 "subsystems": [ 00:16:01.314 { 00:16:01.314 "subsystem": "keyring", 00:16:01.314 "config": [ 00:16:01.314 { 00:16:01.314 "method": "keyring_file_add_key", 00:16:01.314 "params": { 00:16:01.314 "name": "key0", 00:16:01.314 "path": "/tmp/tmp.gqH2L2X4E7" 00:16:01.314 } 00:16:01.314 } 00:16:01.314 ] 00:16:01.314 }, 00:16:01.314 { 00:16:01.314 "subsystem": "iobuf", 00:16:01.314 "config": [ 00:16:01.314 
{ 00:16:01.314 "method": "iobuf_set_options", 00:16:01.314 "params": { 00:16:01.314 "large_bufsize": 135168, 00:16:01.314 "large_pool_count": 1024, 00:16:01.314 "small_bufsize": 8192, 00:16:01.314 "small_pool_count": 8192 00:16:01.314 } 00:16:01.314 } 00:16:01.314 ] 00:16:01.314 }, 00:16:01.314 { 00:16:01.314 "subsystem": "sock", 00:16:01.314 "config": [ 00:16:01.314 { 00:16:01.314 "method": "sock_set_default_impl", 00:16:01.314 "params": { 00:16:01.314 "impl_name": "posix" 00:16:01.314 } 00:16:01.314 }, 00:16:01.314 { 00:16:01.314 "method": "sock_impl_set_options", 00:16:01.314 "params": { 00:16:01.314 "enable_ktls": false, 00:16:01.314 "enable_placement_id": 0, 00:16:01.314 "enable_quickack": false, 00:16:01.314 "enable_recv_pipe": true, 00:16:01.314 "enable_zerocopy_send_client": false, 00:16:01.314 "enable_zerocopy_send_server": true, 00:16:01.314 "impl_name": "ssl", 00:16:01.314 "recv_buf_size": 4096, 00:16:01.314 "send_buf_size": 4096, 00:16:01.314 "tls_version": 0, 00:16:01.314 "zerocopy_threshold": 0 00:16:01.314 } 00:16:01.314 }, 00:16:01.314 { 00:16:01.314 "method": "sock_impl_set_options", 00:16:01.314 "params": { 00:16:01.314 "enable_ktls": false, 00:16:01.314 "enable_placement_id": 0, 00:16:01.314 "enable_quickack": false, 00:16:01.314 "enable_recv_pipe": true, 00:16:01.314 "enable_zerocopy_send_client": false, 00:16:01.314 "enable_zerocopy_send_server": true, 00:16:01.314 "impl_name": "posix", 00:16:01.314 "recv_buf_size": 2097152, 00:16:01.314 "send_buf_size": 2097152, 00:16:01.314 "tls_version": 0, 00:16:01.314 "zerocopy_threshold": 0 00:16:01.314 } 00:16:01.314 } 00:16:01.314 ] 00:16:01.314 }, 00:16:01.314 { 00:16:01.314 "subsystem": "vmd", 00:16:01.314 "config": [] 00:16:01.314 }, 00:16:01.314 { 00:16:01.314 "subsystem": "accel", 00:16:01.314 "config": [ 00:16:01.314 { 00:16:01.314 "method": "accel_set_options", 00:16:01.314 "params": { 00:16:01.314 "buf_count": 2048, 00:16:01.314 "large_cache_size": 16, 00:16:01.314 "sequence_count": 2048, 00:16:01.314 "small_cache_size": 128, 00:16:01.314 "task_count": 2048 00:16:01.314 } 00:16:01.314 } 00:16:01.314 ] 00:16:01.314 }, 00:16:01.314 { 00:16:01.314 "subsystem": "bdev", 00:16:01.314 "config": [ 00:16:01.314 { 00:16:01.314 "method": "bdev_set_options", 00:16:01.314 "params": { 00:16:01.314 "bdev_auto_examine": true, 00:16:01.314 "bdev_io_cache_size": 256, 00:16:01.314 "bdev_io_pool_size": 65535, 00:16:01.314 "iobuf_large_cache_size": 16, 00:16:01.314 "iobuf_small_cache_size": 128 00:16:01.314 } 00:16:01.314 }, 00:16:01.315 { 00:16:01.315 "method": "bdev_raid_set_options", 00:16:01.315 "params": { 00:16:01.315 "process_max_bandwidth_mb_sec": 0, 00:16:01.315 "process_window_size_kb": 1024 00:16:01.315 } 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "method": "bdev_iscsi_set_options", 00:16:01.315 "params": { 00:16:01.315 "timeout_sec": 30 00:16:01.315 } 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "method": "bdev_nvme_set_options", 00:16:01.315 "params": { 00:16:01.315 "action_on_timeout": "none", 00:16:01.315 "allow_accel_sequence": false, 00:16:01.315 "arbitration_burst": 0, 00:16:01.315 "bdev_retry_count": 3, 00:16:01.315 "ctrlr_loss_timeout_sec": 0, 00:16:01.315 "delay_cmd_submit": true, 00:16:01.315 "dhchap_dhgroups": [ 00:16:01.315 "null", 00:16:01.315 "ffdhe2048", 00:16:01.315 "ffdhe3072", 00:16:01.315 "ffdhe4096", 00:16:01.315 "ffdhe6144", 00:16:01.315 "ffdhe8192" 00:16:01.315 ], 00:16:01.315 "dhchap_digests": [ 00:16:01.315 "sha256", 00:16:01.315 "sha384", 00:16:01.315 "sha512" 00:16:01.315 ], 00:16:01.315 
"disable_auto_failback": false, 00:16:01.315 "fast_io_fail_timeout_sec": 0, 00:16:01.315 "generate_uuids": false, 00:16:01.315 "high_priority_weight": 0, 00:16:01.315 "io_path_stat": false, 00:16:01.315 "io_queue_requests": 512, 00:16:01.315 "keep_alive_timeout_ms": 10000, 00:16:01.315 "low_priority_weight": 0, 00:16:01.315 "medium_priority_weight": 0, 00:16:01.315 "nvme_adminq_poll_period_us": 10000, 00:16:01.315 "nvme_error_stat": false, 00:16:01.315 "nvme_ioq_poll_period_us": 0, 00:16:01.315 "rdma_cm_event_timeout_ms": 0, 00:16:01.315 "rdma_max_cq_size": 0, 00:16:01.315 "rdma_srq_size": 0, 00:16:01.315 "reconnect_delay_sec": 0, 00:16:01.315 "timeout_admin_us": 0, 00:16:01.315 "timeout_us": 0, 00:16:01.315 "transport_ack_timeout": 0, 00:16:01.315 "transport_retry_count": 4, 00:16:01.315 "transport_tos": 0 00:16:01.315 } 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "method": "bdev_nvme_attach_controller", 00:16:01.315 "params": { 00:16:01.315 "adrfam": "IPv4", 00:16:01.315 "ctrlr_loss_timeout_sec": 0, 00:16:01.315 "ddgst": false, 00:16:01.315 "fast_io_fail_timeout_sec": 0, 00:16:01.315 "hdgst": false, 00:16:01.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:01.315 "name": "nvme0", 00:16:01.315 "prchk_guard": false, 00:16:01.315 "prchk_reftag": false, 00:16:01.315 "psk": "key0", 00:16:01.315 "reconnect_delay_sec": 0, 00:16:01.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.315 "traddr": "10.0.0.2", 00:16:01.315 "trsvcid": "4420", 00:16:01.315 "trtype": "TCP" 00:16:01.315 } 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "method": "bdev_nvme_set_hotplug", 00:16:01.315 "params": { 00:16:01.315 "enable": false, 00:16:01.315 "period_us": 100000 00:16:01.315 } 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "method": "bdev_enable_histogram", 00:16:01.315 "params": { 00:16:01.315 "enable": true, 00:16:01.315 "name": "nvme0n1" 00:16:01.315 } 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "method": "bdev_wait_for_examine" 00:16:01.315 } 00:16:01.315 ] 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "subsystem": "nbd", 00:16:01.315 "config": [] 00:16:01.315 } 00:16:01.315 ] 00:16:01.315 }' 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 84464 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84464 ']' 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84464 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84464 00:16:01.315 killing process with pid 84464 00:16:01.315 Received shutdown signal, test time was about 1.000000 seconds 00:16:01.315 00:16:01.315 Latency(us) 00:16:01.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.315 =================================================================================================================== 00:16:01.315 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 84464' 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84464 00:16:01.315 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84464 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 84414 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84414 ']' 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84414 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84414 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.573 killing process with pid 84414 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84414' 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84414 00:16:01.573 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84414 00:16:01.832 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:16:01.832 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.832 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:01.832 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:16:01.832 "subsystems": [ 00:16:01.832 { 00:16:01.832 "subsystem": "keyring", 00:16:01.832 "config": [ 00:16:01.832 { 00:16:01.832 "method": "keyring_file_add_key", 00:16:01.832 "params": { 00:16:01.832 "name": "key0", 00:16:01.832 "path": "/tmp/tmp.gqH2L2X4E7" 00:16:01.832 } 00:16:01.832 } 00:16:01.832 ] 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "subsystem": "iobuf", 00:16:01.832 "config": [ 00:16:01.832 { 00:16:01.832 "method": "iobuf_set_options", 00:16:01.832 "params": { 00:16:01.832 "large_bufsize": 135168, 00:16:01.832 "large_pool_count": 1024, 00:16:01.832 "small_bufsize": 8192, 00:16:01.832 "small_pool_count": 8192 00:16:01.832 } 00:16:01.832 } 00:16:01.832 ] 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "subsystem": "sock", 00:16:01.832 "config": [ 00:16:01.832 { 00:16:01.832 "method": "sock_set_default_impl", 00:16:01.832 "params": { 00:16:01.832 "impl_name": "posix" 00:16:01.832 } 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "method": "sock_impl_set_options", 00:16:01.832 "params": { 00:16:01.832 "enable_ktls": false, 00:16:01.832 "enable_placement_id": 0, 00:16:01.832 "enable_quickack": false, 00:16:01.832 "enable_recv_pipe": true, 00:16:01.832 "enable_zerocopy_send_client": false, 00:16:01.832 "enable_zerocopy_send_server": true, 00:16:01.832 "impl_name": "ssl", 00:16:01.832 "recv_buf_size": 4096, 00:16:01.832 "send_buf_size": 4096, 00:16:01.832 "tls_version": 0, 00:16:01.832 "zerocopy_threshold": 0 00:16:01.832 } 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "method": "sock_impl_set_options", 00:16:01.832 "params": { 00:16:01.832 
"enable_ktls": false, 00:16:01.832 "enable_placement_id": 0, 00:16:01.832 "enable_quickack": false, 00:16:01.832 "enable_recv_pipe": true, 00:16:01.832 "enable_zerocopy_send_client": false, 00:16:01.832 "enable_zerocopy_send_server": true, 00:16:01.832 "impl_name": "posix", 00:16:01.832 "recv_buf_size": 2097152, 00:16:01.832 "send_buf_size": 2097152, 00:16:01.832 "tls_version": 0, 00:16:01.832 "zerocopy_threshold": 0 00:16:01.832 } 00:16:01.832 } 00:16:01.832 ] 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "subsystem": "vmd", 00:16:01.832 "config": [] 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "subsystem": "accel", 00:16:01.832 "config": [ 00:16:01.832 { 00:16:01.832 "method": "accel_set_options", 00:16:01.832 "params": { 00:16:01.832 "buf_count": 2048, 00:16:01.832 "large_cache_size": 16, 00:16:01.832 "sequence_count": 2048, 00:16:01.832 "small_cache_size": 128, 00:16:01.832 "task_count": 2048 00:16:01.832 } 00:16:01.832 } 00:16:01.832 ] 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "subsystem": "bdev", 00:16:01.832 "config": [ 00:16:01.832 { 00:16:01.832 "method": "bdev_set_options", 00:16:01.832 "params": { 00:16:01.832 "bdev_auto_examine": true, 00:16:01.832 "bdev_io_cache_size": 256, 00:16:01.832 "bdev_io_pool_size": 65535, 00:16:01.832 "iobuf_large_cache_size": 16, 00:16:01.832 "iobuf_small_cache_size": 128 00:16:01.832 } 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "method": "bdev_raid_set_options", 00:16:01.832 "params": { 00:16:01.832 "process_max_bandwidth_mb_sec": 0, 00:16:01.832 "process_window_size_kb": 1024 00:16:01.832 } 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "method": "bdev_iscsi_set_options", 00:16:01.832 "params": { 00:16:01.832 "timeout_sec": 30 00:16:01.832 } 00:16:01.832 }, 00:16:01.832 { 00:16:01.833 "method": "bdev_nvme_set_options", 00:16:01.833 "params": { 00:16:01.833 "action_on_timeout": "none", 00:16:01.833 "allow_accel_sequence": false, 00:16:01.833 "arbitration_burst": 0, 00:16:01.833 "bdev_retry_count": 3, 00:16:01.833 "ctrlr_loss_timeout_sec": 0, 00:16:01.833 "delay_cmd_submit": true, 00:16:01.833 "dhchap_dhgroups": [ 00:16:01.833 "null", 00:16:01.833 "ffdhe2048", 00:16:01.833 "ffdhe3072", 00:16:01.833 "ffdhe4096", 00:16:01.833 "ffdhe6144", 00:16:01.833 "ffdhe8192" 00:16:01.833 ], 00:16:01.833 "dhchap_digests": [ 00:16:01.833 "sha256", 00:16:01.833 "sha384", 00:16:01.833 "sha512" 00:16:01.833 ], 00:16:01.833 "disable_auto_failback": false, 00:16:01.833 "fast_io_fail_timeout_sec": 0, 00:16:01.833 "generate_uuids": false, 00:16:01.833 "high_priority_weight": 0, 00:16:01.833 "io_path_stat": false, 00:16:01.833 "io_queue_requests": 0, 00:16:01.833 "keep_alive_timeout_ms": 10000, 00:16:01.833 "low_priority_weight": 0, 00:16:01.833 "medium_priority_weight": 0, 00:16:01.833 "nvme_adminq_poll_period_us": 10000, 00:16:01.833 "nvme_error_stat": false, 00:16:01.833 "nvme_ioq_poll_period_us": 0, 00:16:01.833 "rdma_cm_event_timeout_ms": 0, 00:16:01.833 "rdma_max_cq_size": 0, 00:16:01.833 "rdma_srq_size": 0, 00:16:01.833 "reconnect_delay_sec": 0, 00:16:01.833 "timeout_admin_us": 0, 00:16:01.833 "timeout_us": 0, 00:16:01.833 "transport_ack_timeout": 0, 00:16:01.833 "transport_retry_count": 4, 00:16:01.833 "transport_tos": 0 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "bdev_nvme_set_hotplug", 00:16:01.833 "params": { 00:16:01.833 "enable": false, 00:16:01.833 "period_us": 100000 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "bdev_malloc_create", 00:16:01.833 "params": { 00:16:01.833 "block_size": 4096, 00:16:01.833 
"dif_is_head_of_md": false, 00:16:01.833 "dif_pi_format": 0, 00:16:01.833 "dif_type": 0, 00:16:01.833 "md_size": 0, 00:16:01.833 "name": "malloc0", 00:16:01.833 "num_blocks": 8192, 00:16:01.833 "optimal_io_boundary": 0, 00:16:01.833 "physical_block_size": 4096, 00:16:01.833 "uuid": "54e51564-21a9-4119-9275-971bc2bcd002" 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "bdev_wait_for_examine" 00:16:01.833 } 00:16:01.833 ] 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "subsystem": "nbd", 00:16:01.833 "config": [] 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "subsystem": "scheduler", 00:16:01.833 "config": [ 00:16:01.833 { 00:16:01.833 "method": "framework_set_scheduler", 00:16:01.833 "params": { 00:16:01.833 "name": "static" 00:16:01.833 } 00:16:01.833 } 00:16:01.833 ] 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "subsystem": "nvmf", 00:16:01.833 "config": [ 00:16:01.833 { 00:16:01.833 "method": "nvmf_set_config", 00:16:01.833 "params": { 00:16:01.833 "admin_cmd_passthru": { 00:16:01.833 "identify_ctrlr": false 00:16:01.833 }, 00:16:01.833 "discovery_filter": "match_any" 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "nvmf_set_max_subsystems", 00:16:01.833 "params": { 00:16:01.833 "max_subsystems": 1024 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "nvmf_set_crdt", 00:16:01.833 "params": { 00:16:01.833 "crdt1": 0, 00:16:01.833 "crdt2": 0, 00:16:01.833 "crdt3": 0 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "nvmf_create_transport", 00:16:01.833 "params": { 00:16:01.833 "abort_timeout_sec": 1, 00:16:01.833 "ack_timeout": 0, 00:16:01.833 "buf_cache_size": 4294967295, 00:16:01.833 "c2h_success": false, 00:16:01.833 "data_wr_pool_size": 0, 00:16:01.833 "dif_insert_or_strip": false, 00:16:01.833 "in_capsule_data_size": 4096, 00:16:01.833 "io_unit_size": 131072, 00:16:01.833 "max_aq_depth": 128, 00:16:01.833 "max_io_qpairs_per_ctrlr": 127, 00:16:01.833 "max_io_size": 131072, 00:16:01.833 "max_queue_depth": 128, 00:16:01.833 "num_shared_buffers": 511, 00:16:01.833 "sock_priority": 0, 00:16:01.833 "trtype": "TCP", 00:16:01.833 "zcopy": false 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "nvmf_create_subsystem", 00:16:01.833 "params": { 00:16:01.833 "allow_any_host": false, 00:16:01.833 "ana_reporting": false, 00:16:01.833 "max_cntlid": 65519, 00:16:01.833 "max_namespaces": 32, 00:16:01.833 "min_cntlid": 1, 00:16:01.833 "model_number": "SPDK bdev Controller", 00:16:01.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.833 "serial_number": "00000000000000000000" 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "nvmf_subsystem_add_host", 00:16:01.833 "params": { 00:16:01.833 "host": "nqn.2016-06.io.spdk:host1", 00:16:01.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.833 "psk": "key0" 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "nvmf_subsystem_add_ns", 00:16:01.833 "params": { 00:16:01.833 "namespace": { 00:16:01.833 "bdev_name": "malloc0", 00:16:01.833 "nguid": "54E5156421A941199275971BC2BCD002", 00:16:01.833 "no_auto_visible": false, 00:16:01.833 "nsid": 1, 00:16:01.833 "uuid": "54e51564-21a9-4119-9275-971bc2bcd002" 00:16:01.833 }, 00:16:01.833 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:01.833 } 00:16:01.833 }, 00:16:01.833 { 00:16:01.833 "method": "nvmf_subsystem_add_listener", 00:16:01.833 "params": { 00:16:01.833 "listen_address": { 00:16:01.833 "adrfam": "IPv4", 00:16:01.833 "traddr": "10.0.0.2", 00:16:01.833 "trsvcid": "4420", 00:16:01.833 
"trtype": "TCP" 00:16:01.833 }, 00:16:01.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.833 "secure_channel": false, 00:16:01.833 "sock_impl": "ssl" 00:16:01.833 } 00:16:01.833 } 00:16:01.833 ] 00:16:01.833 } 00:16:01.833 ] 00:16:01.833 }' 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84556 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84556 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84556 ']' 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.833 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.091 [2024-07-25 12:23:16.701056] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:02.091 [2024-07-25 12:23:16.701365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.091 [2024-07-25 12:23:16.836592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.091 [2024-07-25 12:23:16.948069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.091 [2024-07-25 12:23:16.948355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.091 [2024-07-25 12:23:16.948376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.091 [2024-07-25 12:23:16.948387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.091 [2024-07-25 12:23:16.948395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:02.091 [2024-07-25 12:23:16.948485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.349 [2024-07-25 12:23:17.188702] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.607 [2024-07-25 12:23:17.220642] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:02.607 [2024-07-25 12:23:17.221122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=84600 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 84600 /var/tmp/bdevperf.sock 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84600 ']' 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
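The waitforlisten expansion running here polls the UNIX-domain RPC socket until the freshly started app answers, with the loop bounded by the local max_retries=100 visible in the trace. A rough sketch of the idea, not the actual helper from autotest_common.sh:

  # Poll an SPDK RPC socket until the daemon answers or retries run out.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1              # app died while starting up
          "$rpc" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
          sleep 0.5
      done
      return 1
  }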
00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.866 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:16:02.866 "subsystems": [ 00:16:02.866 { 00:16:02.866 "subsystem": "keyring", 00:16:02.866 "config": [ 00:16:02.866 { 00:16:02.866 "method": "keyring_file_add_key", 00:16:02.866 "params": { 00:16:02.866 "name": "key0", 00:16:02.866 "path": "/tmp/tmp.gqH2L2X4E7" 00:16:02.866 } 00:16:02.866 } 00:16:02.866 ] 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "subsystem": "iobuf", 00:16:02.866 "config": [ 00:16:02.866 { 00:16:02.866 "method": "iobuf_set_options", 00:16:02.866 "params": { 00:16:02.866 "large_bufsize": 135168, 00:16:02.866 "large_pool_count": 1024, 00:16:02.866 "small_bufsize": 8192, 00:16:02.866 "small_pool_count": 8192 00:16:02.866 } 00:16:02.866 } 00:16:02.866 ] 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "subsystem": "sock", 00:16:02.866 "config": [ 00:16:02.866 { 00:16:02.866 "method": "sock_set_default_impl", 00:16:02.866 "params": { 00:16:02.866 "impl_name": "posix" 00:16:02.866 } 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "method": "sock_impl_set_options", 00:16:02.866 "params": { 00:16:02.866 "enable_ktls": false, 00:16:02.866 "enable_placement_id": 0, 00:16:02.866 "enable_quickack": false, 00:16:02.866 "enable_recv_pipe": true, 00:16:02.866 "enable_zerocopy_send_client": false, 00:16:02.866 "enable_zerocopy_send_server": true, 00:16:02.866 "impl_name": "ssl", 00:16:02.866 "recv_buf_size": 4096, 00:16:02.866 "send_buf_size": 4096, 00:16:02.866 "tls_version": 0, 00:16:02.866 "zerocopy_threshold": 0 00:16:02.866 } 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "method": "sock_impl_set_options", 00:16:02.866 "params": { 00:16:02.866 "enable_ktls": false, 00:16:02.866 "enable_placement_id": 0, 00:16:02.866 "enable_quickack": false, 00:16:02.866 "enable_recv_pipe": true, 00:16:02.866 "enable_zerocopy_send_client": false, 00:16:02.866 "enable_zerocopy_send_server": true, 00:16:02.866 "impl_name": "posix", 00:16:02.866 "recv_buf_size": 2097152, 00:16:02.866 "send_buf_size": 2097152, 00:16:02.866 "tls_version": 0, 00:16:02.866 "zerocopy_threshold": 0 00:16:02.866 } 00:16:02.866 } 00:16:02.866 ] 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "subsystem": "vmd", 00:16:02.866 "config": [] 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "subsystem": "accel", 00:16:02.866 "config": [ 00:16:02.866 { 00:16:02.866 "method": "accel_set_options", 00:16:02.866 "params": { 00:16:02.866 "buf_count": 2048, 00:16:02.866 "large_cache_size": 16, 00:16:02.866 "sequence_count": 2048, 00:16:02.866 "small_cache_size": 128, 00:16:02.866 "task_count": 2048 00:16:02.866 } 00:16:02.866 } 00:16:02.866 ] 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "subsystem": "bdev", 00:16:02.866 "config": [ 00:16:02.866 { 00:16:02.866 "method": "bdev_set_options", 00:16:02.866 "params": { 00:16:02.866 "bdev_auto_examine": true, 00:16:02.866 "bdev_io_cache_size": 256, 00:16:02.866 "bdev_io_pool_size": 65535, 00:16:02.866 "iobuf_large_cache_size": 16, 00:16:02.866 "iobuf_small_cache_size": 128 00:16:02.867 } 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "method": "bdev_raid_set_options", 00:16:02.867 "params": { 00:16:02.867 
"process_max_bandwidth_mb_sec": 0, 00:16:02.867 "process_window_size_kb": 1024 00:16:02.867 } 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "method": "bdev_iscsi_set_options", 00:16:02.867 "params": { 00:16:02.867 "timeout_sec": 30 00:16:02.867 } 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "method": "bdev_nvme_set_options", 00:16:02.867 "params": { 00:16:02.867 "action_on_timeout": "none", 00:16:02.867 "allow_accel_sequence": false, 00:16:02.867 "arbitration_burst": 0, 00:16:02.867 "bdev_retry_count": 3, 00:16:02.867 "ctrlr_loss_timeout_sec": 0, 00:16:02.867 "delay_cmd_submit": true, 00:16:02.867 "dhchap_dhgroups": [ 00:16:02.867 "null", 00:16:02.867 "ffdhe2048", 00:16:02.867 "ffdhe3072", 00:16:02.867 "ffdhe4096", 00:16:02.867 "ffdhe6144", 00:16:02.867 "ffdhe8192" 00:16:02.867 ], 00:16:02.867 "dhchap_digests": [ 00:16:02.867 "sha256", 00:16:02.867 "sha384", 00:16:02.867 "sha512" 00:16:02.867 ], 00:16:02.867 "disable_auto_failback": false, 00:16:02.867 "fast_io_fail_timeout_sec": 0, 00:16:02.867 "generate_uuids": false, 00:16:02.867 "high_priority_weight": 0, 00:16:02.867 "io_path_stat": false, 00:16:02.867 "io_queue_requests": 512, 00:16:02.867 "keep_alive_timeout_ms": 10000, 00:16:02.867 "low_priority_weight": 0, 00:16:02.867 "medium_priority_weight": 0, 00:16:02.867 "nvme_adminq_poll_period_us": 10000, 00:16:02.867 "nvme_error_stat": false, 00:16:02.867 "nvme_ioq_poll_period_us": 0, 00:16:02.867 "rdma_cm_event_timeout_ms": 0, 00:16:02.867 "rdma_max_cq_size": 0, 00:16:02.867 "rdma_srq_size": 0, 00:16:02.867 "reconnect_delay_sec": 0, 00:16:02.867 "timeout_admin_us": 0, 00:16:02.867 "timeout_us": 0, 00:16:02.867 "transport_ack_timeout": 0, 00:16:02.867 "transport_retry_count": 4, 00:16:02.867 "transport_tos": 0 00:16:02.867 } 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "method": "bdev_nvme_attach_controller", 00:16:02.867 "params": { 00:16:02.867 "adrfam": "IPv4", 00:16:02.867 "ctrlr_loss_timeout_sec": 0, 00:16:02.867 "ddgst": false, 00:16:02.867 "fast_io_fail_timeout_sec": 0, 00:16:02.867 "hdgst": false, 00:16:02.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:02.867 "name": "nvme0", 00:16:02.867 "prchk_guard": false, 00:16:02.867 "prchk_reftag": false, 00:16:02.867 "psk": "key0", 00:16:02.867 "reconnect_delay_sec": 0, 00:16:02.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.867 "traddr": "10.0.0.2", 00:16:02.867 "trsvcid": "4420", 00:16:02.867 "trtype": "TCP" 00:16:02.867 } 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "method": "bdev_nvme_set_hotplug", 00:16:02.867 "params": { 00:16:02.867 "enable": false, 00:16:02.867 "period_us": 100000 00:16:02.867 } 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "method": "bdev_enable_histogram", 00:16:02.867 "params": { 00:16:02.867 "enable": true, 00:16:02.867 "name": "nvme0n1" 00:16:02.867 } 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "method": "bdev_wait_for_examine" 00:16:02.867 } 00:16:02.867 ] 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "subsystem": "nbd", 00:16:02.867 "config": [] 00:16:02.867 } 00:16:02.867 ] 00:16:02.867 }' 00:16:03.125 [2024-07-25 12:23:17.787067] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:16:03.125 [2024-07-25 12:23:17.787483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84600 ] 00:16:03.125 [2024-07-25 12:23:17.924244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.383 [2024-07-25 12:23:18.047860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.383 [2024-07-25 12:23:18.230360] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:03.950 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.950 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:03.950 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:03.950 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:16:04.225 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.225 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:04.515 Running I/O for 1 seconds... 00:16:05.451 00:16:05.451 Latency(us) 00:16:05.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.451 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:05.451 Verification LBA range: start 0x0 length 0x2000 00:16:05.451 nvme0n1 : 1.03 3833.02 14.97 0.00 0.00 32959.55 11319.85 24188.74 00:16:05.451 =================================================================================================================== 00:16:05.451 Total : 3833.02 14.97 0.00 0.00 32959.55 11319.85 24188.74 00:16:05.451 0 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:05.451 nvmf_trace.0 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:16:05.451 12:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84600 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84600 ']' 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84600 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84600 00:16:05.451 killing process with pid 84600 00:16:05.451 Received shutdown signal, test time was about 1.000000 seconds 00:16:05.451 00:16:05.451 Latency(us) 00:16:05.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.451 =================================================================================================================== 00:16:05.451 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84600' 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84600 00:16:05.451 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84600 00:16:05.711 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:05.711 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.711 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:05.711 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.711 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:05.711 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.711 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.970 rmmod nvme_tcp 00:16:05.970 rmmod nvme_fabrics 00:16:05.970 rmmod nvme_keyring 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 84556 ']' 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 84556 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84556 ']' 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84556 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84556 00:16:05.970 killing process with 
pid 84556 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84556' 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84556 00:16:05.970 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84556 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.FXXJJgoVcr /tmp/tmp.94NU8MLevd /tmp/tmp.gqH2L2X4E7 00:16:06.231 ************************************ 00:16:06.231 END TEST nvmf_tls 00:16:06.231 ************************************ 00:16:06.231 00:16:06.231 real 1m28.782s 00:16:06.231 user 2m21.472s 00:16:06.231 sys 0m28.746s 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.231 ************************************ 00:16:06.231 START TEST nvmf_fips 00:16:06.231 ************************************ 00:16:06.231 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:06.231 * Looking for test storage... 
00:16:06.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:16:06.231 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 
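The xtrace that follows expands ge/cmp_versions from scripts/common.sh: both dotted versions are split into components (IFS=.-:) and compared left to right until one side differs. A compact sketch of the same >= check for plain numeric components (real openssl version strings can carry letter suffixes, which this sketch ignores):

  # Succeed when dotted version $1 >= $2; numeric components only.
  ge_sketch() {
      local -a v1 v2
      IFS=. read -ra v1 <<< "$1"
      IFS=. read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
      done
      return 0   # equal counts as >=
  }
  ge_sketch "$(openssl version | awk '{print $2}')" 3.0.0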
00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:06.490 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:16:06.491 Error setting digest 00:16:06.491 0052BE8BF87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:06.491 0052BE8BF87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:06.491 Cannot find device "nvmf_tgt_br" 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:06.491 Cannot find device "nvmf_tgt_br2" 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:06.491 Cannot find device "nvmf_tgt_br" 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:06.491 Cannot find device "nvmf_tgt_br2" 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:06.491 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:06.750 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:06.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:16:06.751 00:16:06.751 --- 10.0.0.2 ping statistics --- 00:16:06.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.751 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:06.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:06.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:06.751 00:16:06.751 --- 10.0.0.3 ping statistics --- 00:16:06.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.751 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:06.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:06.751 00:16:06.751 --- 10.0.0.1 ping statistics --- 00:16:06.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.751 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.751 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:07.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=84880 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 84880 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84880 ']' 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.010 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:07.010 [2024-07-25 12:23:21.720862] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
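The pings above were exercising the topology that nvmf_veth_init built a few lines earlier: the initiator interface on the host, target interfaces inside the nvmf_tgt_ns_spdk namespace, all joined by a bridge. Condensed to one target interface (the second, 10.0.0.3, follows the same pattern), the commands from that trace are:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT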
00:16:07.010 [2024-07-25 12:23:21.720976] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.010 [2024-07-25 12:23:21.856618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.268 [2024-07-25 12:23:21.970501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.268 [2024-07-25 12:23:21.970550] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.268 [2024-07-25 12:23:21.970561] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.268 [2024-07-25 12:23:21.970569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.268 [2024-07-25 12:23:21.970576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.268 [2024-07-25 12:23:21.970604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.836 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.836 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:16:07.836 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.836 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.836 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:08.095 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.095 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:08.095 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:08.095 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:08.095 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:08.095 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:08.095 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:08.095 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:08.095 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.353 [2024-07-25 12:23:22.978627] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.353 [2024-07-25 12:23:22.994599] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:08.353 [2024-07-25 12:23:22.994841] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.353 [2024-07-25 12:23:23.026519] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 
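The PSK-path deprecation warning above refers to exactly what fips.sh just did: write the TLS key in NVMe interchange format to a private file and hand the target that path when registering the host. As a sketch built from the values in this trace:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  printf '%s' "$key" > "$key_path"   # echo -n in the trace; no trailing newline
  chmod 0600 "$key_path"             # keep the PSK file private, as the test does
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk "$key_path"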
00:16:08.353 malloc0 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=84938 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 84938 /var/tmp/bdevperf.sock 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84938 ']' 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.353 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:08.353 [2024-07-25 12:23:23.139116] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:08.353 [2024-07-25 12:23:23.139556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84938 ] 00:16:08.614 [2024-07-25 12:23:23.281144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.614 [2024-07-25 12:23:23.408747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.548 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.548 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:16:09.548 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:09.548 [2024-07-25 12:23:24.293554] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.548 [2024-07-25 12:23:24.293673] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:09.548 TLSTESTn1 00:16:09.548 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:09.806 Running I/O for 10 seconds... 
00:16:19.775
00:16:19.775                                                Latency(us)
00:16:19.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:19.775 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:19.775 Verification LBA range: start 0x0 length 0x2000
00:16:19.775 TLSTESTn1 : 10.02 3911.77 15.28 0.00 0.00 32657.06 7179.17 35270.28
00:16:19.775 ===================================================================================================================
00:16:19.775 Total : 3911.77 15.28 0.00 0.00 32657.06 7179.17 35270.28
00:16:19.775 0
00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:19.775 nvmf_trace.0 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84938 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84938 ']' 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84938 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.775 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84938 00:16:20.033 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:20.033 killing process with pid 84938 00:16:20.033 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:20.033 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84938' 00:16:20.033 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84938
00:16:20.033 Received shutdown signal, test time was about 10.000000 seconds
00:16:20.033
00:16:20.033                                                Latency(us)
00:16:20.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:20.033 ===================================================================================================================
00:16:20.033 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:20.033 [2024-07-25 12:23:34.651662]
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:20.033 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84938 00:16:20.292 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:20.292 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:20.292 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:20.292 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.292 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:20.292 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.292 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:20.292 rmmod nvme_tcp 00:16:20.292 rmmod nvme_fabrics 00:16:20.292 rmmod nvme_keyring 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 84880 ']' 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 84880 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84880 ']' 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84880 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84880 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:20.292 killing process with pid 84880 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84880' 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84880 00:16:20.292 [2024-07-25 12:23:35.051791] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:20.292 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84880 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:20.551 ************************************ 00:16:20.551 END TEST nvmf_fips 00:16:20.551 ************************************ 00:16:20.551 00:16:20.551 real 0m14.392s 00:16:20.551 user 0m19.444s 00:16:20.551 sys 0m5.832s 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.551 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:20.813 12:23:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:16:20.813 12:23:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:16:20.813 12:23:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:16:20.813 ************************************ 00:16:20.813 END TEST nvmf_target_extra 00:16:20.813 ************************************ 00:16:20.813 00:16:20.813 real 6m35.814s 00:16:20.813 user 15m57.607s 00:16:20.813 sys 1m19.932s 00:16:20.813 12:23:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.813 12:23:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.813 12:23:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:20.813 12:23:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:20.813 12:23:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.813 12:23:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:20.813 ************************************ 00:16:20.813 START TEST nvmf_host 00:16:20.813 ************************************ 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:20.813 * Looking for test storage... 
00:16:20.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.813 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.814 ************************************ 00:16:20.814 START TEST nvmf_multicontroller 00:16:20.814 ************************************ 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:20.814 * Looking for test storage... 
00:16:20.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.814 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.071 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:21.072 
12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:21.072 12:23:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:21.072 Cannot find device "nvmf_tgt_br" 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.072 Cannot find device "nvmf_tgt_br2" 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:21.072 Cannot find device "nvmf_tgt_br" 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:21.072 Cannot find device "nvmf_tgt_br2" 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set 
nvmf_init_br up 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:21.072 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:21.330 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:21.330 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:21.330 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:21.330 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:21.331 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:21.331 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:21.331 12:23:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:21.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:21.331 00:16:21.331 --- 10.0.0.2 ping statistics --- 00:16:21.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.331 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:21.331 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:21.331 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:21.331 00:16:21.331 --- 10.0.0.3 ping statistics --- 00:16:21.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.331 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:21.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:21.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:21.331 00:16:21.331 --- 10.0.0.1 ping statistics --- 00:16:21.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.331 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85336 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85336 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85336 ']' 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.331 12:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:21.331 [2024-07-25 12:23:36.100307] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:16:21.331 [2024-07-25 12:23:36.100416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.589 [2024-07-25 12:23:36.241344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:21.589 [2024-07-25 12:23:36.377038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.589 [2024-07-25 12:23:36.377124] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.589 [2024-07-25 12:23:36.377148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.589 [2024-07-25 12:23:36.377159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.589 [2024-07-25 12:23:36.377168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.589 [2024-07-25 12:23:36.377361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.589 [2024-07-25 12:23:36.378166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.589 [2024-07-25 12:23:36.378219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.524 [2024-07-25 12:23:37.117654] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.524 Malloc0 00:16:22.524 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.525 [2024-07-25 12:23:37.190493] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.525 [2024-07-25 12:23:37.198409] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.525 Malloc1 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85388 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85388 /var/tmp/bdevperf.sock 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85388 ']' 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
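The duplicate-attach attempts that follow are all wrapped in NOT, the autotest_common.sh helper that inverts the wrapped command's exit status so an expected JSON-RPC failure (here Code=-114, a name collision on NVMe0) counts as a pass; the trace itself shows its bookkeeping (es=1, the (( es > 128 )) signal check, the final (( !es == 0 ))). Stripped of that extra checking, the helper behaves like this sketch:

    # Approximate sketch of the harness's NOT helper: succeed only when the
    # wrapped command fails; the real autotest_common.sh version also screens
    # for signal exits (es > 128) and can match an expected error string.
    NOT() {
        if "$@"; then
            return 1    # wrapped command unexpectedly succeeded
        fi
        return 0        # wrapped command failed, as the test expects
    }
    # usage: NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ...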
00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.525 12:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.460 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.460 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:16:23.460 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:23.460 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.460 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.719 NVMe0n1 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.719 1 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.719 2024/07/25 12:23:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 
hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:23.719 request: 00:16:23.719 { 00:16:23.719 "method": "bdev_nvme_attach_controller", 00:16:23.719 "params": { 00:16:23.719 "name": "NVMe0", 00:16:23.719 "trtype": "tcp", 00:16:23.719 "traddr": "10.0.0.2", 00:16:23.719 "adrfam": "ipv4", 00:16:23.719 "trsvcid": "4420", 00:16:23.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.719 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:23.719 "hostaddr": "10.0.0.2", 00:16:23.719 "hostsvcid": "60000", 00:16:23.719 "prchk_reftag": false, 00:16:23.719 "prchk_guard": false, 00:16:23.719 "hdgst": false, 00:16:23.719 "ddgst": false 00:16:23.719 } 00:16:23.719 } 00:16:23.719 Got JSON-RPC error response 00:16:23.719 GoRPCClient: error on JSON-RPC call 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.719 2024/07/25 12:23:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 
trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:23.719 request: 00:16:23.719 { 00:16:23.719 "method": "bdev_nvme_attach_controller", 00:16:23.719 "params": { 00:16:23.719 "name": "NVMe0", 00:16:23.719 "trtype": "tcp", 00:16:23.719 "traddr": "10.0.0.2", 00:16:23.719 "adrfam": "ipv4", 00:16:23.719 "trsvcid": "4420", 00:16:23.719 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:23.719 "hostaddr": "10.0.0.2", 00:16:23.719 "hostsvcid": "60000", 00:16:23.719 "prchk_reftag": false, 00:16:23.719 "prchk_guard": false, 00:16:23.719 "hdgst": false, 00:16:23.719 "ddgst": false 00:16:23.719 } 00:16:23.719 } 00:16:23.719 Got JSON-RPC error response 00:16:23.719 GoRPCClient: error on JSON-RPC call 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:23.719 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.720 2024/07/25 12:23:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:23.720 request: 00:16:23.720 { 
00:16:23.720 "method": "bdev_nvme_attach_controller", 00:16:23.720 "params": { 00:16:23.720 "name": "NVMe0", 00:16:23.720 "trtype": "tcp", 00:16:23.720 "traddr": "10.0.0.2", 00:16:23.720 "adrfam": "ipv4", 00:16:23.720 "trsvcid": "4420", 00:16:23.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.720 "hostaddr": "10.0.0.2", 00:16:23.720 "hostsvcid": "60000", 00:16:23.720 "prchk_reftag": false, 00:16:23.720 "prchk_guard": false, 00:16:23.720 "hdgst": false, 00:16:23.720 "ddgst": false, 00:16:23.720 "multipath": "disable" 00:16:23.720 } 00:16:23.720 } 00:16:23.720 Got JSON-RPC error response 00:16:23.720 GoRPCClient: error on JSON-RPC call 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.720 2024/07/25 12:23:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:23.720 request: 00:16:23.720 { 00:16:23.720 "method": "bdev_nvme_attach_controller", 00:16:23.720 "params": { 00:16:23.720 "name": "NVMe0", 00:16:23.720 "trtype": "tcp", 00:16:23.720 
"traddr": "10.0.0.2", 00:16:23.720 "adrfam": "ipv4", 00:16:23.720 "trsvcid": "4420", 00:16:23.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.720 "hostaddr": "10.0.0.2", 00:16:23.720 "hostsvcid": "60000", 00:16:23.720 "prchk_reftag": false, 00:16:23.720 "prchk_guard": false, 00:16:23.720 "hdgst": false, 00:16:23.720 "ddgst": false, 00:16:23.720 "multipath": "failover" 00:16:23.720 } 00:16:23.720 } 00:16:23.720 Got JSON-RPC error response 00:16:23.720 GoRPCClient: error on JSON-RPC call 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.720 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.720 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.978 00:16:23.978 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.978 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:23.978 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.978 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:23.978 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:23.978 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.978 12:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:23.978 12:23:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:24.914 0 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85388 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85388 ']' 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85388 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85388 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:25.173 killing process with pid 85388 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85388' 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85388 00:16:25.173 12:23:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85388 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:16:25.431 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:25.431 [2024-07-25 12:23:37.320241] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:25.431 [2024-07-25 12:23:37.320355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85388 ] 00:16:25.431 [2024-07-25 12:23:37.461135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.431 [2024-07-25 12:23:37.587615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.431 [2024-07-25 12:23:38.617601] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 86e60c7f-e673-4a99-908c-173b8aaf6809 already exists 00:16:25.431 [2024-07-25 12:23:38.617678] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:86e60c7f-e673-4a99-908c-173b8aaf6809 alias for bdev NVMe1n1 00:16:25.431 [2024-07-25 12:23:38.617699] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:25.431 Running I/O for 1 seconds... 00:16:25.431 00:16:25.431 Latency(us) 00:16:25.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.431 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:25.431 NVMe0n1 : 1.00 19690.69 76.92 0.00 0.00 6490.43 3321.48 11736.90 00:16:25.431 =================================================================================================================== 00:16:25.431 Total : 19690.69 76.92 0.00 0.00 6490.43 3321.48 11736.90 00:16:25.431 Received shutdown signal, test time was about 1.000000 seconds 00:16:25.431 00:16:25.431 Latency(us) 00:16:25.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.431 =================================================================================================================== 00:16:25.431 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.431 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.431 rmmod nvme_tcp 00:16:25.431 rmmod nvme_fabrics 00:16:25.431 rmmod nvme_keyring 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85336 ']' 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85336 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85336 ']' 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85336 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85336 00:16:25.431 killing process with pid 85336 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85336' 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85336 00:16:25.431 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85336 00:16:25.689 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.690 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.690 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.690 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.690 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.690 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.690 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.690 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.690 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:25.947 00:16:25.947 real 0m4.999s 00:16:25.947 user 0m15.494s 00:16:25.947 sys 0m1.111s 00:16:25.947 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.947 ************************************ 00:16:25.947 12:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:25.947 END TEST nvmf_multicontroller 00:16:25.947 ************************************ 00:16:25.947 12:23:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:25.947 12:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:25.947 12:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:16:25.947 12:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.947 ************************************ 00:16:25.947 START TEST nvmf_aer 00:16:25.947 ************************************ 00:16:25.947 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:25.947 * Looking for test storage... 00:16:25.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 
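[Editor's note] The three rejected attaches traced earlier in the multicontroller run (Code=-114) all reuse the controller name NVMe0 against /var/tmp/bdevperf.sock: once for a second subsystem, once with -x disable, once with -x failover. A minimal sketch of replaying that sequence by hand, assuming scripts/rpc.py in this checkout accepts the same flags the traced rpc_cmd wrapper forwarded (here rpc_cmd resolved to the Go JSON-RPC client because SPDK_JSONRPC_GO_CLIENT=1):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Reusing the name NVMe0 with multipath disabled is refused (Code=-114,
    # "already exists and multipath is disabled"), as in the trace above.
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable

    # So is -x failover when the request repeats the already-attached path
    # ("already exists with the specified network path").
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover

    # A genuinely new path to the same subsystem (port 4421) is accepted and
    # becomes a second path under NVMe0, matching the multicontroller.sh@79 step.
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Note that only the last call succeeds in the trace; the two failures are the expected negative cases the NOT wrapper asserts on.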
00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:25.948 Cannot find device "nvmf_tgt_br" 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # true 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.948 Cannot find device "nvmf_tgt_br2" 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # true 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:25.948 12:23:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:25.948 Cannot find device "nvmf_tgt_br" 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # true 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:25.948 Cannot find device "nvmf_tgt_br2" 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # true 00:16:25.948 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:26.206 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:26.206 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.207 12:23:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.207 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.207 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:26.207 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:26.207 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # 
ip link set nvmf_init_br master nvmf_br 00:16:26.207 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.207 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:26.207 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:26.207 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:26.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:16:26.464 00:16:26.464 --- 10.0.0.2 ping statistics --- 00:16:26.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.464 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:26.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:26.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:26.464 00:16:26.464 --- 10.0.0.3 ping statistics --- 00:16:26.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.464 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:26.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:26.464 00:16:26.464 --- 10.0.0.1 ping statistics --- 00:16:26.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.464 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=85650 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 85650 00:16:26.464 12:23:41 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 85650 ']' 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:26.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.464 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:26.465 12:23:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:26.465 [2024-07-25 12:23:41.168331] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:26.465 [2024-07-25 12:23:41.168441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.465 [2024-07-25 12:23:41.306360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.721 [2024-07-25 12:23:41.427547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.721 [2024-07-25 12:23:41.427611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.721 [2024-07-25 12:23:41.427639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.721 [2024-07-25 12:23:41.427647] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.721 [2024-07-25 12:23:41.427655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
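[Editor's note] The nvmfappstart step traced above launches the target inside the namespace and then blocks in waitforlisten until /var/tmp/spdk.sock answers. A hedged sketch of that startup handshake, assuming rpc_get_methods as the readiness probe (the exact polling used by the helper is not shown in this excerpt):

    # Launch the target inside its namespace, exactly as traced above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC socket until the app answers; rpc_get_methods is a cheap probe.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1      # bail out if the target died
        sleep 0.5
    done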
00:16:26.721 [2024-07-25 12:23:41.428068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.721 [2024-07-25 12:23:41.428223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.721 [2024-07-25 12:23:41.428639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.721 [2024-07-25 12:23:41.428671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.656 [2024-07-25 12:23:42.204990] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.656 Malloc0 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.656 [2024-07-25 12:23:42.271090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.656 
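[Editor's note] The aer.sh setup traced above can be reproduced with the same RPCs issued directly; a sketch using the flags exactly as they appear in the xtrace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP Transport Init notice
    $RPC bdev_malloc_create 64 512 --name Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDK00000000000001 -m 2                        # -m 2: at most 2 namespaces
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems                              # prints the JSON shown below

The nvmf_get_subsystems output that follows reflects this state: the discovery subsystem plus cnode1 with one listener, one namespace (Malloc0, nsid 1), and max_namespaces set to 2.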
12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.656 [ 00:16:27.656 { 00:16:27.656 "allow_any_host": true, 00:16:27.656 "hosts": [], 00:16:27.656 "listen_addresses": [], 00:16:27.656 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:27.656 "subtype": "Discovery" 00:16:27.656 }, 00:16:27.656 { 00:16:27.656 "allow_any_host": true, 00:16:27.656 "hosts": [], 00:16:27.656 "listen_addresses": [ 00:16:27.656 { 00:16:27.656 "adrfam": "IPv4", 00:16:27.656 "traddr": "10.0.0.2", 00:16:27.656 "trsvcid": "4420", 00:16:27.656 "trtype": "TCP" 00:16:27.656 } 00:16:27.656 ], 00:16:27.656 "max_cntlid": 65519, 00:16:27.656 "max_namespaces": 2, 00:16:27.656 "min_cntlid": 1, 00:16:27.656 "model_number": "SPDK bdev Controller", 00:16:27.656 "namespaces": [ 00:16:27.656 { 00:16:27.656 "bdev_name": "Malloc0", 00:16:27.656 "name": "Malloc0", 00:16:27.656 "nguid": "A0D64027DFEF4D038DA1D80F5421B125", 00:16:27.656 "nsid": 1, 00:16:27.656 "uuid": "a0d64027-dfef-4d03-8da1-d80f5421b125" 00:16:27.656 } 00:16:27.656 ], 00:16:27.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.656 "serial_number": "SPDK00000000000001", 00:16:27.656 "subtype": "NVMe" 00:16:27.656 } 00:16:27.656 ] 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=85704 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:16:27.656 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:27.657 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:27.657 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:27.657 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:16:27.657 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:27.657 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.657 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.915 Malloc1 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.915 Asynchronous Event Request test 00:16:27.915 Attaching to 10.0.0.2 00:16:27.915 Attached to 10.0.0.2 00:16:27.915 Registering asynchronous event callbacks... 00:16:27.915 Starting namespace attribute notice tests for all controllers... 00:16:27.915 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:27.915 aer_cb - Changed Namespace 00:16:27.915 Cleaning up... 00:16:27.915 [ 00:16:27.915 { 00:16:27.915 "allow_any_host": true, 00:16:27.915 "hosts": [], 00:16:27.915 "listen_addresses": [], 00:16:27.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:27.915 "subtype": "Discovery" 00:16:27.915 }, 00:16:27.915 { 00:16:27.915 "allow_any_host": true, 00:16:27.915 "hosts": [], 00:16:27.915 "listen_addresses": [ 00:16:27.915 { 00:16:27.915 "adrfam": "IPv4", 00:16:27.915 "traddr": "10.0.0.2", 00:16:27.915 "trsvcid": "4420", 00:16:27.915 "trtype": "TCP" 00:16:27.915 } 00:16:27.915 ], 00:16:27.915 "max_cntlid": 65519, 00:16:27.915 "max_namespaces": 2, 00:16:27.915 "min_cntlid": 1, 00:16:27.915 "model_number": "SPDK bdev Controller", 00:16:27.915 "namespaces": [ 00:16:27.915 { 00:16:27.915 "bdev_name": "Malloc0", 00:16:27.915 "name": "Malloc0", 00:16:27.915 "nguid": "A0D64027DFEF4D038DA1D80F5421B125", 00:16:27.915 "nsid": 1, 00:16:27.915 "uuid": "a0d64027-dfef-4d03-8da1-d80f5421b125" 00:16:27.915 }, 00:16:27.915 { 00:16:27.915 "bdev_name": "Malloc1", 00:16:27.915 "name": "Malloc1", 00:16:27.915 "nguid": "526E8702974148A7BCEFED38B96BA227", 00:16:27.915 "nsid": 2, 00:16:27.915 "uuid": "526e8702-9741-48a7-bcef-ed38b96ba227" 00:16:27.915 } 00:16:27.915 ], 00:16:27.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.915 "serial_number": "SPDK00000000000001", 00:16:27.915 "subtype": "NVMe" 00:16:27.915 } 00:16:27.915 ] 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 85704 00:16:27.915 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:27.916 rmmod nvme_tcp 00:16:27.916 rmmod nvme_fabrics 00:16:27.916 rmmod nvme_keyring 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 85650 ']' 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 85650 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 85650 ']' 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 85650 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.916 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85650 00:16:28.174 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.174 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.174 killing process with pid 85650 00:16:28.174 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85650' 00:16:28.174 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 85650 00:16:28.174 12:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 85650 00:16:28.174 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.174 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:16:28.174 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.174 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.174 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.174 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.174 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.174 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.433 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:28.433 00:16:28.433 real 0m2.448s 00:16:28.433 user 0m6.569s 00:16:28.433 sys 0m0.683s 00:16:28.433 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.433 ************************************ 00:16:28.433 END TEST nvmf_aer 00:16:28.434 ************************************ 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.434 ************************************ 00:16:28.434 START TEST nvmf_async_init 00:16:28.434 ************************************ 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:28.434 * Looking for test storage... 
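[Editor's note] The aer test's synchronization is visible in the xtrace above: a waitforfile helper polls for /tmp/aer_touch_file in 0.1 s steps (counter i, limit 200), and only once the aer tool has touched the file does the test add a second namespace to fire the event. A hedged reconstruction of that flow from the traced commands (the helper body is inferred from the @1265-@1276 trace lines):

    waitforfile() {
        # Poll for the touch file in 0.1 s steps, giving up after 200 tries (~20 s).
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rm -f /tmp/aer_touch_file
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!

    waitforfile /tmp/aer_touch_file

    # Adding a second namespace triggers the "Changed Namespace" AER logged above.
    $RPC bdev_malloc_create 64 4096 --name Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"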
00:16:28.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:28.434 12:23:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=31753a4365cd4b54ab319ae1f32822cf 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:28.434 Cannot find device "nvmf_tgt_br" 00:16:28.434 12:23:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.434 Cannot find device "nvmf_tgt_br2" 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:28.434 Cannot find device "nvmf_tgt_br" 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:16:28.434 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:28.691 Cannot find device "nvmf_tgt_br2" 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.691 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:28.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:28.949 00:16:28.949 --- 10.0.0.2 ping statistics --- 00:16:28.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.949 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:28.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:28.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:16:28.949 00:16:28.949 --- 10.0.0.3 ping statistics --- 00:16:28.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.949 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:28.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:28.949 00:16:28.949 --- 10.0.0.1 ping statistics --- 00:16:28.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.949 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=85879 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 85879 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 85879 ']' 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:28.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.949 12:23:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:28.949 [2024-07-25 12:23:43.665381] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:28.949 [2024-07-25 12:23:43.665524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.949 [2024-07-25 12:23:43.805540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.208 [2024-07-25 12:23:43.913179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
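Condensed, the veth/bridge topology and target launch just traced reduce to roughly the following sketch (the second target interface, nvmf_tgt_if2/10.0.0.3, is elided, and the while loop is a simplified stand-in for waitforlisten, which really polls the RPC socket):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified waitforlisten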
00:16:29.208 [2024-07-25 12:23:43.913239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.208 [2024-07-25 12:23:43.913251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.208 [2024-07-25 12:23:43.913260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.208 [2024-07-25 12:23:43.913267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.208 [2024-07-25 12:23:43.913305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.774 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:29.774 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:16:29.774 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.774 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:29.774 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.037 [2024-07-25 12:23:44.655056] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.037 null0 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 31753a4365cd4b54ab319ae1f32822cf 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.037 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.037 [2024-07-25 12:23:44.695197] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.038 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.038 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:30.038 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.038 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.309 nvme0n1 00:16:30.309 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.309 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:30.309 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.309 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.309 [ 00:16:30.309 { 00:16:30.309 "aliases": [ 00:16:30.309 "31753a43-65cd-4b54-ab31-9ae1f32822cf" 00:16:30.309 ], 00:16:30.309 "assigned_rate_limits": { 00:16:30.309 "r_mbytes_per_sec": 0, 00:16:30.309 "rw_ios_per_sec": 0, 00:16:30.309 "rw_mbytes_per_sec": 0, 00:16:30.309 "w_mbytes_per_sec": 0 00:16:30.309 }, 00:16:30.309 "block_size": 512, 00:16:30.309 "claimed": false, 00:16:30.309 "driver_specific": { 00:16:30.309 "mp_policy": "active_passive", 00:16:30.309 "nvme": [ 00:16:30.309 { 00:16:30.309 "ctrlr_data": { 00:16:30.309 "ana_reporting": false, 00:16:30.309 "cntlid": 1, 00:16:30.309 "firmware_revision": "24.09", 00:16:30.309 "model_number": "SPDK bdev Controller", 00:16:30.309 "multi_ctrlr": true, 00:16:30.309 "oacs": { 00:16:30.309 "firmware": 0, 00:16:30.309 "format": 0, 00:16:30.309 "ns_manage": 0, 00:16:30.310 "security": 0 00:16:30.310 }, 00:16:30.310 "serial_number": "00000000000000000000", 00:16:30.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.310 "vendor_id": "0x8086" 00:16:30.310 }, 00:16:30.310 "ns_data": { 00:16:30.310 "can_share": true, 00:16:30.310 "id": 1 00:16:30.310 }, 00:16:30.310 "trid": { 00:16:30.310 "adrfam": "IPv4", 00:16:30.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.310 "traddr": "10.0.0.2", 00:16:30.310 "trsvcid": "4420", 00:16:30.310 "trtype": "TCP" 00:16:30.310 }, 00:16:30.310 "vs": { 00:16:30.310 "nvme_version": "1.3" 00:16:30.310 } 00:16:30.310 } 00:16:30.310 ] 00:16:30.310 }, 00:16:30.310 "memory_domains": [ 00:16:30.310 { 00:16:30.310 "dma_device_id": "system", 00:16:30.310 "dma_device_type": 1 00:16:30.310 } 00:16:30.310 ], 00:16:30.310 "name": "nvme0n1", 00:16:30.310 "num_blocks": 2097152, 00:16:30.310 "product_name": "NVMe disk", 00:16:30.310 "supported_io_types": { 00:16:30.310 "abort": true, 00:16:30.310 "compare": true, 
00:16:30.310 "compare_and_write": true, 00:16:30.310 "copy": true, 00:16:30.310 "flush": true, 00:16:30.310 "get_zone_info": false, 00:16:30.310 "nvme_admin": true, 00:16:30.310 "nvme_io": true, 00:16:30.310 "nvme_io_md": false, 00:16:30.310 "nvme_iov_md": false, 00:16:30.310 "read": true, 00:16:30.310 "reset": true, 00:16:30.310 "seek_data": false, 00:16:30.310 "seek_hole": false, 00:16:30.310 "unmap": false, 00:16:30.310 "write": true, 00:16:30.310 "write_zeroes": true, 00:16:30.310 "zcopy": false, 00:16:30.310 "zone_append": false, 00:16:30.310 "zone_management": false 00:16:30.310 }, 00:16:30.310 "uuid": "31753a43-65cd-4b54-ab31-9ae1f32822cf", 00:16:30.310 "zoned": false 00:16:30.310 } 00:16:30.310 ] 00:16:30.310 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.310 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:30.311 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.311 12:23:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.311 [2024-07-25 12:23:44.963945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:30.311 [2024-07-25 12:23:44.964044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2491b00 (9): Bad file descriptor 00:16:30.311 [2024-07-25 12:23:45.096458] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:30.311 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.311 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:30.311 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.311 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.311 [ 00:16:30.311 { 00:16:30.311 "aliases": [ 00:16:30.311 "31753a43-65cd-4b54-ab31-9ae1f32822cf" 00:16:30.311 ], 00:16:30.311 "assigned_rate_limits": { 00:16:30.311 "r_mbytes_per_sec": 0, 00:16:30.311 "rw_ios_per_sec": 0, 00:16:30.311 "rw_mbytes_per_sec": 0, 00:16:30.311 "w_mbytes_per_sec": 0 00:16:30.311 }, 00:16:30.311 "block_size": 512, 00:16:30.311 "claimed": false, 00:16:30.311 "driver_specific": { 00:16:30.311 "mp_policy": "active_passive", 00:16:30.311 "nvme": [ 00:16:30.311 { 00:16:30.311 "ctrlr_data": { 00:16:30.311 "ana_reporting": false, 00:16:30.311 "cntlid": 2, 00:16:30.311 "firmware_revision": "24.09", 00:16:30.311 "model_number": "SPDK bdev Controller", 00:16:30.311 "multi_ctrlr": true, 00:16:30.311 "oacs": { 00:16:30.311 "firmware": 0, 00:16:30.311 "format": 0, 00:16:30.311 "ns_manage": 0, 00:16:30.311 "security": 0 00:16:30.311 }, 00:16:30.311 "serial_number": "00000000000000000000", 00:16:30.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.311 "vendor_id": "0x8086" 00:16:30.311 }, 00:16:30.311 "ns_data": { 00:16:30.311 "can_share": true, 00:16:30.312 "id": 1 00:16:30.312 }, 00:16:30.312 "trid": { 00:16:30.312 "adrfam": "IPv4", 00:16:30.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.312 "traddr": "10.0.0.2", 00:16:30.312 "trsvcid": "4420", 00:16:30.312 "trtype": "TCP" 00:16:30.312 }, 00:16:30.312 "vs": { 00:16:30.312 "nvme_version": "1.3" 00:16:30.312 } 00:16:30.312 } 00:16:30.312 ] 00:16:30.312 }, 00:16:30.312 "memory_domains": [ 00:16:30.312 { 
00:16:30.312 "dma_device_id": "system", 00:16:30.312 "dma_device_type": 1 00:16:30.312 } 00:16:30.312 ], 00:16:30.312 "name": "nvme0n1", 00:16:30.312 "num_blocks": 2097152, 00:16:30.312 "product_name": "NVMe disk", 00:16:30.312 "supported_io_types": { 00:16:30.312 "abort": true, 00:16:30.312 "compare": true, 00:16:30.312 "compare_and_write": true, 00:16:30.312 "copy": true, 00:16:30.312 "flush": true, 00:16:30.312 "get_zone_info": false, 00:16:30.312 "nvme_admin": true, 00:16:30.312 "nvme_io": true, 00:16:30.312 "nvme_io_md": false, 00:16:30.312 "nvme_iov_md": false, 00:16:30.312 "read": true, 00:16:30.312 "reset": true, 00:16:30.312 "seek_data": false, 00:16:30.312 "seek_hole": false, 00:16:30.312 "unmap": false, 00:16:30.312 "write": true, 00:16:30.312 "write_zeroes": true, 00:16:30.312 "zcopy": false, 00:16:30.312 "zone_append": false, 00:16:30.312 "zone_management": false 00:16:30.312 }, 00:16:30.312 "uuid": "31753a43-65cd-4b54-ab31-9ae1f32822cf", 00:16:30.312 "zoned": false 00:16:30.312 } 00:16:30.312 ] 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.gLNTuf6GFk 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.gLNTuf6GFk 00:16:30.312 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:30.313 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.313 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.313 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.313 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:30.313 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.313 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.313 [2024-07-25 12:23:45.160140] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:30.313 [2024-07-25 12:23:45.160356] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:30.313 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.313 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gLNTuf6GFk 00:16:30.313 12:23:45 
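rpc_cmd in these traces is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the secure-channel re-attach being set up here boils down to the following sketch using rpc.py directly (the key value is the PSK interchange-format string echoed above):

key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"                     # the target refuses world-readable PSK files
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"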
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.313 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.313 [2024-07-25 12:23:45.168150] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gLNTuf6GFk 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.572 [2024-07-25 12:23:45.176129] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:30.572 [2024-07-25 12:23:45.176189] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:30.572 nvme0n1 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.572 [ 00:16:30.572 { 00:16:30.572 "aliases": [ 00:16:30.572 "31753a43-65cd-4b54-ab31-9ae1f32822cf" 00:16:30.572 ], 00:16:30.572 "assigned_rate_limits": { 00:16:30.572 "r_mbytes_per_sec": 0, 00:16:30.572 "rw_ios_per_sec": 0, 00:16:30.572 "rw_mbytes_per_sec": 0, 00:16:30.572 "w_mbytes_per_sec": 0 00:16:30.572 }, 00:16:30.572 "block_size": 512, 00:16:30.572 "claimed": false, 00:16:30.572 "driver_specific": { 00:16:30.572 "mp_policy": "active_passive", 00:16:30.572 "nvme": [ 00:16:30.572 { 00:16:30.572 "ctrlr_data": { 00:16:30.572 "ana_reporting": false, 00:16:30.572 "cntlid": 3, 00:16:30.572 "firmware_revision": "24.09", 00:16:30.572 "model_number": "SPDK bdev Controller", 00:16:30.572 "multi_ctrlr": true, 00:16:30.572 "oacs": { 00:16:30.572 "firmware": 0, 00:16:30.572 "format": 0, 00:16:30.572 "ns_manage": 0, 00:16:30.572 "security": 0 00:16:30.572 }, 00:16:30.572 "serial_number": "00000000000000000000", 00:16:30.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.572 "vendor_id": "0x8086" 00:16:30.572 }, 00:16:30.572 "ns_data": { 00:16:30.572 "can_share": true, 00:16:30.572 "id": 1 00:16:30.572 }, 00:16:30.572 "trid": { 00:16:30.572 "adrfam": "IPv4", 00:16:30.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.572 "traddr": "10.0.0.2", 00:16:30.572 "trsvcid": "4421", 00:16:30.572 "trtype": "TCP" 00:16:30.572 }, 00:16:30.572 "vs": { 00:16:30.572 "nvme_version": "1.3" 00:16:30.572 } 00:16:30.572 } 00:16:30.572 ] 00:16:30.572 }, 00:16:30.572 "memory_domains": [ 00:16:30.572 { 00:16:30.572 "dma_device_id": "system", 00:16:30.572 "dma_device_type": 1 00:16:30.572 } 00:16:30.572 ], 00:16:30.572 "name": "nvme0n1", 00:16:30.572 "num_blocks": 2097152, 00:16:30.572 "product_name": "NVMe disk", 00:16:30.572 "supported_io_types": { 00:16:30.572 "abort": true, 00:16:30.572 "compare": true, 00:16:30.572 
"compare_and_write": true, 00:16:30.572 "copy": true, 00:16:30.572 "flush": true, 00:16:30.572 "get_zone_info": false, 00:16:30.572 "nvme_admin": true, 00:16:30.572 "nvme_io": true, 00:16:30.572 "nvme_io_md": false, 00:16:30.572 "nvme_iov_md": false, 00:16:30.572 "read": true, 00:16:30.572 "reset": true, 00:16:30.572 "seek_data": false, 00:16:30.572 "seek_hole": false, 00:16:30.572 "unmap": false, 00:16:30.572 "write": true, 00:16:30.572 "write_zeroes": true, 00:16:30.572 "zcopy": false, 00:16:30.572 "zone_append": false, 00:16:30.572 "zone_management": false 00:16:30.572 }, 00:16:30.572 "uuid": "31753a43-65cd-4b54-ab31-9ae1f32822cf", 00:16:30.572 "zoned": false 00:16:30.572 } 00:16:30.572 ] 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.gLNTuf6GFk 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:30.572 rmmod nvme_tcp 00:16:30.572 rmmod nvme_fabrics 00:16:30.572 rmmod nvme_keyring 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 85879 ']' 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 85879 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 85879 ']' 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 85879 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85879 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:16:30.572 killing process with pid 85879 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85879' 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 85879 00:16:30.572 [2024-07-25 12:23:45.426646] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:30.572 [2024-07-25 12:23:45.426696] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:30.572 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 85879 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:30.831 00:16:30.831 real 0m2.558s 00:16:30.831 user 0m2.340s 00:16:30.831 sys 0m0.598s 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.831 12:23:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:30.831 ************************************ 00:16:30.831 END TEST nvmf_async_init 00:16:30.831 ************************************ 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.090 ************************************ 00:16:31.090 START TEST dma 00:16:31.090 ************************************ 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:31.090 * Looking for test storage... 
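nvmftestfini, traced above as the async_init suite wound down, reduces to roughly this sketch (ip netns delete is an assumed equivalent of the _remove_spdk_ns helper):

modprobe -v -r nvme-tcp              # drops nvme_tcp and, per the rmmod lines above, nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 85879
ip netns delete nvmf_tgt_ns_spdk     # assumption: what _remove_spdk_ns does here
ip -4 addr flush nvmf_init_if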
00:16:31.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:16:31.090 00:16:31.090 real 0m0.102s 00:16:31.090 user 0m0.046s 00:16:31.090 sys 0m0.063s 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:31.090 ************************************ 00:16:31.090 END TEST dma 00:16:31.090 ************************************ 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.090 ************************************ 00:16:31.090 START TEST nvmf_identify 00:16:31.090 ************************************ 00:16:31.090 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:31.090 * Looking for test storage... 00:16:31.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.350 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.351 12:23:45 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:31.351 12:23:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:31.351 Cannot find device "nvmf_tgt_br" 00:16:31.351 12:23:46 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:31.351 Cannot find device "nvmf_tgt_br2" 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:31.351 Cannot find device "nvmf_tgt_br" 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # true 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:31.351 Cannot find device "nvmf_tgt_br2" 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:31.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:31.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:31.351 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if up 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:31.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:31.610 00:16:31.610 --- 10.0.0.2 ping statistics --- 00:16:31.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.610 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:31.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:31.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:31.610 00:16:31.610 --- 10.0.0.3 ping statistics --- 00:16:31.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.610 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:31.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:31.610 00:16:31.610 --- 10.0.0.1 ping statistics --- 00:16:31.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.610 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:31.610 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86146 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86146 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 86146 ']' 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.611 12:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:31.611 [2024-07-25 12:23:46.407956] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
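Unlike async_init, which pinned the target to a single core with -m 0x1, the identify suite launches it on four reactors; the command traced above is:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # mask 0xF = reactors on cores 0-3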
00:16:31.611 [2024-07-25 12:23:46.408085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:31.870 [2024-07-25 12:23:46.549482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:31.870 [2024-07-25 12:23:46.674927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:31.870 [2024-07-25 12:23:46.675022] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:31.870 [2024-07-25 12:23:46.675050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:31.870 [2024-07-25 12:23:46.675058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:31.870 [2024-07-25 12:23:46.675065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:31.870 [2024-07-25 12:23:46.675235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:31.870 [2024-07-25 12:23:46.675336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:16:31.870 [2024-07-25 12:23:46.675960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:16:31.870 [2024-07-25 12:23:46.676011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:32.807 [2024-07-25 12:23:47.427958] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:32.807 Malloc0
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:32.807 [2024-07-25 12:23:47.526145] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:32.807 [
00:16:32.807   {
00:16:32.807     "allow_any_host": true,
00:16:32.807     "hosts": [],
00:16:32.807     "listen_addresses": [
00:16:32.807       {
00:16:32.807         "adrfam": "IPv4",
00:16:32.807         "traddr": "10.0.0.2",
00:16:32.807         "trsvcid": "4420",
00:16:32.807         "trtype": "TCP"
00:16:32.807       }
00:16:32.807     ],
00:16:32.807     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:32.807     "subtype": "Discovery"
00:16:32.807   },
00:16:32.807   {
00:16:32.807     "allow_any_host": true,
00:16:32.807     "hosts": [],
00:16:32.807     "listen_addresses": [
00:16:32.807       {
00:16:32.807         "adrfam": "IPv4",
00:16:32.807         "traddr": "10.0.0.2",
00:16:32.807         "trsvcid": "4420",
00:16:32.807         "trtype": "TCP"
00:16:32.807       }
00:16:32.807     ],
00:16:32.807     "max_cntlid": 65519,
00:16:32.807     "max_namespaces": 32,
00:16:32.807     "min_cntlid": 1,
00:16:32.807     "model_number": "SPDK bdev Controller",
00:16:32.807     "namespaces": [
00:16:32.807       {
00:16:32.807         "bdev_name": "Malloc0",
00:16:32.807         "eui64": "ABCDEF0123456789",
00:16:32.807         "name": "Malloc0",
00:16:32.807         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:16:32.807         "nsid": 1,
00:16:32.807         "uuid": "d2820f9b-9b44-4e75-b0a3-134220162b0d"
00:16:32.807       }
00:16:32.807     ],
00:16:32.807     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:32.807     "serial_number": "SPDK00000000000001",
00:16:32.807     "subtype": "NVMe"
00:16:32.807   }
00:16:32.807 ]
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.807 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2
trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:32.807 [2024-07-25 12:23:47.577775] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:32.807 [2024-07-25 12:23:47.577834] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86200 ] 00:16:33.068 [2024-07-25 12:23:47.713732] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:33.068 [2024-07-25 12:23:47.713824] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:33.068 [2024-07-25 12:23:47.713833] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:33.068 [2024-07-25 12:23:47.713849] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:33.068 [2024-07-25 12:23:47.713860] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:33.068 [2024-07-25 12:23:47.714030] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:33.068 [2024-07-25 12:23:47.714095] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd87a60 0 00:16:33.068 [2024-07-25 12:23:47.726316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:33.068 [2024-07-25 12:23:47.726343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:33.068 [2024-07-25 12:23:47.726350] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:33.068 [2024-07-25 12:23:47.726355] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:33.068 [2024-07-25 12:23:47.726407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.726416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.726421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.068 [2024-07-25 12:23:47.726437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:33.068 [2024-07-25 12:23:47.726470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 00:16:33.068 [2024-07-25 12:23:47.734319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.068 [2024-07-25 12:23:47.734343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.068 [2024-07-25 12:23:47.734349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.068 [2024-07-25 12:23:47.734373] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:33.068 [2024-07-25 12:23:47.734383] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:33.068 [2024-07-25 12:23:47.734390] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:33.068 [2024-07-25 12:23:47.734410] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.068 [2024-07-25 12:23:47.734443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.068 [2024-07-25 12:23:47.734474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 00:16:33.068 [2024-07-25 12:23:47.734567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.068 [2024-07-25 12:23:47.734575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.068 [2024-07-25 12:23:47.734578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.068 [2024-07-25 12:23:47.734589] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:33.068 [2024-07-25 12:23:47.734598] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:33.068 [2024-07-25 12:23:47.734606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.068 [2024-07-25 12:23:47.734623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.068 [2024-07-25 12:23:47.734644] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 00:16:33.068 [2024-07-25 12:23:47.734718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.068 [2024-07-25 12:23:47.734725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.068 [2024-07-25 12:23:47.734729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.068 [2024-07-25 12:23:47.734740] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:33.068 [2024-07-25 12:23:47.734749] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:33.068 [2024-07-25 12:23:47.734757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.068 [2024-07-25 12:23:47.734773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.068 [2024-07-25 12:23:47.734793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 
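
The DEBUG records around this point trace the fabrics controller-enable handshake one register access at a time: the FABRIC CONNECT on the admin queue returns CNTLID 0x0001, then the host walks its init state machine with FABRIC PROPERTY GET/SET commands, reading VS and CAP, checking CC.EN, clearing it and waiting for CSTS.RDY = 0, setting CC.EN = 1 and waiting for CSTS.RDY = 1, and only then issuing IDENTIFY. The same exchange can be reproduced by hand against this target; a sketch, assuming nvme-cli is available (the nvme-tcp module was already loaded above) and that spdk_nvme_identify accepts a single component flag such as nvme in place of -L all:

  # kernel initiator: discover the same subsystems the trace is talking to
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # SPDK initiator: rerun the identify with a narrower debug mask
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L nvme
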
00:16:33.068 [2024-07-25 12:23:47.734868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.068 [2024-07-25 12:23:47.734875] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.068 [2024-07-25 12:23:47.734878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.068 [2024-07-25 12:23:47.734883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.068 [2024-07-25 12:23:47.734889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:33.069 [2024-07-25 12:23:47.734900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.734905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.734909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.069 [2024-07-25 12:23:47.734918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.069 [2024-07-25 12:23:47.734937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 00:16:33.069 [2024-07-25 12:23:47.735005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.069 [2024-07-25 12:23:47.735012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.069 [2024-07-25 12:23:47.735016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.069 [2024-07-25 12:23:47.735026] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:33.069 [2024-07-25 12:23:47.735032] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:33.069 [2024-07-25 12:23:47.735041] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:33.069 [2024-07-25 12:23:47.735147] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:33.069 [2024-07-25 12:23:47.735162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:33.069 [2024-07-25 12:23:47.735183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.069 [2024-07-25 12:23:47.735201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.069 [2024-07-25 12:23:47.735228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 00:16:33.069 [2024-07-25 12:23:47.735328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.069 [2024-07-25 12:23:47.735337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:16:33.069 [2024-07-25 12:23:47.735341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.069 [2024-07-25 12:23:47.735351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:33.069 [2024-07-25 12:23:47.735363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.069 [2024-07-25 12:23:47.735380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.069 [2024-07-25 12:23:47.735407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 00:16:33.069 [2024-07-25 12:23:47.735473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.069 [2024-07-25 12:23:47.735481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.069 [2024-07-25 12:23:47.735484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.069 [2024-07-25 12:23:47.735494] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:33.069 [2024-07-25 12:23:47.735499] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:33.069 [2024-07-25 12:23:47.735507] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:33.069 [2024-07-25 12:23:47.735519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:33.069 [2024-07-25 12:23:47.735530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.069 [2024-07-25 12:23:47.735544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.069 [2024-07-25 12:23:47.735565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 00:16:33.069 [2024-07-25 12:23:47.735665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.069 [2024-07-25 12:23:47.735673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.069 [2024-07-25 12:23:47.735677] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735681] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87a60): datao=0, datal=4096, cccid=0 00:16:33.069 [2024-07-25 12:23:47.735687] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdca840) on tqpair(0xd87a60): expected_datao=0, 
payload_size=4096 00:16:33.069 [2024-07-25 12:23:47.735692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735701] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735705] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.069 [2024-07-25 12:23:47.735721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.069 [2024-07-25 12:23:47.735724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.069 [2024-07-25 12:23:47.735738] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:33.069 [2024-07-25 12:23:47.735744] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:33.069 [2024-07-25 12:23:47.735749] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:33.069 [2024-07-25 12:23:47.735759] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:33.069 [2024-07-25 12:23:47.735765] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:33.069 [2024-07-25 12:23:47.735770] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:33.069 [2024-07-25 12:23:47.735779] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:33.069 [2024-07-25 12:23:47.735799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.069 [2024-07-25 12:23:47.735816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.069 [2024-07-25 12:23:47.735838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 00:16:33.069 [2024-07-25 12:23:47.735901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.069 [2024-07-25 12:23:47.735909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.069 [2024-07-25 12:23:47.735912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.069 [2024-07-25 12:23:47.735925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd87a60) 00:16:33.069 [2024-07-25 12:23:47.735942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.069 [2024-07-25 12:23:47.735949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd87a60) 00:16:33.069 [2024-07-25 12:23:47.735963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.069 [2024-07-25 12:23:47.735970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd87a60) 00:16:33.069 [2024-07-25 12:23:47.735984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.069 [2024-07-25 12:23:47.735991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.735996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.069 [2024-07-25 12:23:47.736000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.069 [2024-07-25 12:23:47.736006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.070 [2024-07-25 12:23:47.736011] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:33.070 [2024-07-25 12:23:47.736020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:33.070 [2024-07-25 12:23:47.736028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.736033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87a60) 00:16:33.070 [2024-07-25 12:23:47.736040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.070 [2024-07-25 12:23:47.736067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca840, cid 0, qid 0 00:16:33.070 [2024-07-25 12:23:47.736074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdca9c0, cid 1, qid 0 00:16:33.070 [2024-07-25 12:23:47.736080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcab40, cid 2, qid 0 00:16:33.070 [2024-07-25 12:23:47.736085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.070 [2024-07-25 12:23:47.736090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcae40, cid 4, qid 0 00:16:33.070 [2024-07-25 12:23:47.736203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.070 [2024-07-25 12:23:47.736210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.070 [2024-07-25 12:23:47.736214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.736218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xdcae40) on tqpair=0xd87a60 00:16:33.070 [2024-07-25 12:23:47.736224] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:33.070 [2024-07-25 12:23:47.736230] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:33.070 [2024-07-25 12:23:47.736242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.736247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87a60) 00:16:33.070 [2024-07-25 12:23:47.736255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.070 [2024-07-25 12:23:47.736274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcae40, cid 4, qid 0 00:16:33.070 [2024-07-25 12:23:47.740321] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.070 [2024-07-25 12:23:47.740341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.070 [2024-07-25 12:23:47.740346] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740350] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87a60): datao=0, datal=4096, cccid=4 00:16:33.070 [2024-07-25 12:23:47.740356] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcae40) on tqpair(0xd87a60): expected_datao=0, payload_size=4096 00:16:33.070 [2024-07-25 12:23:47.740362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740370] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740374] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.070 [2024-07-25 12:23:47.740387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.070 [2024-07-25 12:23:47.740390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcae40) on tqpair=0xd87a60 00:16:33.070 [2024-07-25 12:23:47.740413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:33.070 [2024-07-25 12:23:47.740448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87a60) 00:16:33.070 [2024-07-25 12:23:47.740465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.070 [2024-07-25 12:23:47.740474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd87a60) 00:16:33.070 [2024-07-25 12:23:47.740489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.070 [2024-07-25 
12:23:47.740522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcae40, cid 4, qid 0 00:16:33.070 [2024-07-25 12:23:47.740530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcafc0, cid 5, qid 0 00:16:33.070 [2024-07-25 12:23:47.740664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.070 [2024-07-25 12:23:47.740671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.070 [2024-07-25 12:23:47.740675] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740679] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87a60): datao=0, datal=1024, cccid=4 00:16:33.070 [2024-07-25 12:23:47.740695] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcae40) on tqpair(0xd87a60): expected_datao=0, payload_size=1024 00:16:33.070 [2024-07-25 12:23:47.740700] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740707] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740711] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.070 [2024-07-25 12:23:47.740724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.070 [2024-07-25 12:23:47.740727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.740732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcafc0) on tqpair=0xd87a60 00:16:33.070 [2024-07-25 12:23:47.782385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.070 [2024-07-25 12:23:47.782428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.070 [2024-07-25 12:23:47.782434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcae40) on tqpair=0xd87a60 00:16:33.070 [2024-07-25 12:23:47.782467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87a60) 00:16:33.070 [2024-07-25 12:23:47.782489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.070 [2024-07-25 12:23:47.782533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcae40, cid 4, qid 0 00:16:33.070 [2024-07-25 12:23:47.782632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.070 [2024-07-25 12:23:47.782640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.070 [2024-07-25 12:23:47.782644] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782648] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87a60): datao=0, datal=3072, cccid=4 00:16:33.070 [2024-07-25 12:23:47.782653] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcae40) on tqpair(0xd87a60): expected_datao=0, payload_size=3072 00:16:33.070 [2024-07-25 12:23:47.782658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782667] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:16:33.070 [2024-07-25 12:23:47.782672] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.070 [2024-07-25 12:23:47.782687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.070 [2024-07-25 12:23:47.782691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcae40) on tqpair=0xd87a60 00:16:33.070 [2024-07-25 12:23:47.782710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd87a60) 00:16:33.070 [2024-07-25 12:23:47.782730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.070 [2024-07-25 12:23:47.782758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcae40, cid 4, qid 0 00:16:33.070 [2024-07-25 12:23:47.782838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.070 [2024-07-25 12:23:47.782845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.070 [2024-07-25 12:23:47.782849] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782853] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd87a60): datao=0, datal=8, cccid=4 00:16:33.070 [2024-07-25 12:23:47.782858] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdcae40) on tqpair(0xd87a60): expected_datao=0, payload_size=8 00:16:33.070 [2024-07-25 12:23:47.782871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782879] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.070 [2024-07-25 12:23:47.782883] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.070 ===================================================== 00:16:33.070 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:33.070 ===================================================== 00:16:33.070 Controller Capabilities/Features 00:16:33.070 ================================ 00:16:33.070 Vendor ID: 0000 00:16:33.070 Subsystem Vendor ID: 0000 00:16:33.070 Serial Number: .................... 00:16:33.070 Model Number: ........................................ 
00:16:33.070 Firmware Version: 24.09
00:16:33.070 Recommended Arb Burst: 0
00:16:33.070 IEEE OUI Identifier: 00 00 00
00:16:33.070 Multi-path I/O
00:16:33.070 May have multiple subsystem ports: No
00:16:33.070 May have multiple controllers: No
00:16:33.070 Associated with SR-IOV VF: No
00:16:33.070 Max Data Transfer Size: 131072
00:16:33.070 Max Number of Namespaces: 0
00:16:33.070 Max Number of I/O Queues: 1024
00:16:33.070 NVMe Specification Version (VS): 1.3
00:16:33.070 NVMe Specification Version (Identify): 1.3
00:16:33.070 Maximum Queue Entries: 128
00:16:33.070 Contiguous Queues Required: Yes
00:16:33.070 Arbitration Mechanisms Supported
00:16:33.070 Weighted Round Robin: Not Supported
00:16:33.070 Vendor Specific: Not Supported
00:16:33.071 Reset Timeout: 15000 ms
00:16:33.071 Doorbell Stride: 4 bytes
00:16:33.071 NVM Subsystem Reset: Not Supported
00:16:33.071 Command Sets Supported
00:16:33.071 NVM Command Set: Supported
00:16:33.071 Boot Partition: Not Supported
00:16:33.071 Memory Page Size Minimum: 4096 bytes
00:16:33.071 Memory Page Size Maximum: 4096 bytes
00:16:33.071 Persistent Memory Region: Not Supported
00:16:33.071 Optional Asynchronous Events Supported
00:16:33.071 Namespace Attribute Notices: Not Supported
00:16:33.071 Firmware Activation Notices: Not Supported
00:16:33.071 ANA Change Notices: Not Supported
00:16:33.071 PLE Aggregate Log Change Notices: Not Supported
00:16:33.071 LBA Status Info Alert Notices: Not Supported
00:16:33.071 EGE Aggregate Log Change Notices: Not Supported
00:16:33.071 Normal NVM Subsystem Shutdown event: Not Supported
00:16:33.071 Zone Descriptor Change Notices: Not Supported
00:16:33.071 Discovery Log Change Notices: Supported
00:16:33.071 Controller Attributes
00:16:33.071 128-bit Host Identifier: Not Supported
00:16:33.071 Non-Operational Permissive Mode: Not Supported
00:16:33.071 NVM Sets: Not Supported
00:16:33.071 Read Recovery Levels: Not Supported
00:16:33.071 Endurance Groups: Not Supported
00:16:33.071 Predictable Latency Mode: Not Supported
00:16:33.071 Traffic Based Keep Alive: Not Supported
00:16:33.071 Namespace Granularity: Not Supported
00:16:33.071 SQ Associations: Not Supported
00:16:33.071 UUID List: Not Supported
00:16:33.071 Multi-Domain Subsystem: Not Supported
00:16:33.071 Fixed Capacity Management: Not Supported
00:16:33.071 Variable Capacity Management: Not Supported
00:16:33.071 Delete Endurance Group: Not Supported
00:16:33.071 Delete NVM Set: Not Supported
00:16:33.071 Extended LBA Formats Supported: Not Supported
00:16:33.071 Flexible Data Placement Supported: Not Supported
00:16:33.071
00:16:33.071 Controller Memory Buffer Support
00:16:33.071 ================================
00:16:33.071 Supported: No
00:16:33.071
00:16:33.071 Persistent Memory Region Support
00:16:33.071 ================================
00:16:33.071 Supported: No
00:16:33.071
00:16:33.071 Admin Command Set Attributes
00:16:33.071 ============================
00:16:33.071 Security Send/Receive: Not Supported
00:16:33.071 Format NVM: Not Supported
00:16:33.071 Firmware Activate/Download: Not Supported
00:16:33.071 Namespace Management: Not Supported
00:16:33.071 Device Self-Test: Not Supported
00:16:33.071 Directives: Not Supported
00:16:33.071 NVMe-MI: Not Supported
00:16:33.071 Virtualization Management: Not Supported
00:16:33.071 Doorbell Buffer Config: Not Supported
00:16:33.071 Get LBA Status Capability: Not Supported
00:16:33.071 Command & Feature Lockdown Capability: Not Supported
00:16:33.071 Abort Command Limit: 1
00:16:33.071 Async Event Request Limit: 4
00:16:33.071 Number of Firmware Slots: N/A
00:16:33.071 Firmware Slot 1 Read-Only: N/A
00:16:33.071 [2024-07-25 12:23:47.824376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:33.071 [2024-07-25 12:23:47.824412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:33.071 [2024-07-25 12:23:47.824418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:33.071 [2024-07-25 12:23:47.824425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcae40) on tqpair=0xd87a60
00:16:33.071 Firmware Activation Without Reset: N/A
00:16:33.071 Multiple Update Detection Support: N/A
00:16:33.071 Firmware Update Granularity: No Information Provided
00:16:33.071 Per-Namespace SMART Log: No
00:16:33.071 Asymmetric Namespace Access Log Page: Not Supported
00:16:33.071 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:16:33.071 Command Effects Log Page: Not Supported
00:16:33.071 Get Log Page Extended Data: Supported
00:16:33.071 Telemetry Log Pages: Not Supported
00:16:33.071 Persistent Event Log Pages: Not Supported
00:16:33.071 Supported Log Pages Log Page: May Support
00:16:33.071 Commands Supported & Effects Log Page: Not Supported
00:16:33.071 Feature Identifiers & Effects Log Page: May Support
00:16:33.071 NVMe-MI Commands & Effects Log Page: May Support
00:16:33.071 Data Area 4 for Telemetry Log: Not Supported
00:16:33.071 Error Log Page Entries Supported: 128
00:16:33.071 Keep Alive: Not Supported
00:16:33.071
00:16:33.071 NVM Command Set Attributes
00:16:33.071 ==========================
00:16:33.071 Submission Queue Entry Size
00:16:33.071 Max: 1
00:16:33.071 Min: 1
00:16:33.071 Completion Queue Entry Size
00:16:33.071 Max: 1
00:16:33.071 Min: 1
00:16:33.071 Number of Namespaces: 0
00:16:33.071 Compare Command: Not Supported
00:16:33.071 Write Uncorrectable Command: Not Supported
00:16:33.071 Dataset Management Command: Not Supported
00:16:33.071 Write Zeroes Command: Not Supported
00:16:33.071 Set Features Save Field: Not Supported
00:16:33.071 Reservations: Not Supported
00:16:33.071 Timestamp: Not Supported
00:16:33.071 Copy: Not Supported
00:16:33.071 Volatile Write Cache: Not Present
00:16:33.071 Atomic Write Unit (Normal): 1
00:16:33.071 Atomic Write Unit (PFail): 1
00:16:33.071 Atomic Compare & Write Unit: 1
00:16:33.071 Fused Compare & Write: Supported
00:16:33.071 Scatter-Gather List
00:16:33.071 SGL Command Set: Supported
00:16:33.071 SGL Keyed: Supported
00:16:33.071 SGL Bit Bucket Descriptor: Not Supported
00:16:33.071 SGL Metadata Pointer: Not Supported
00:16:33.071 Oversized SGL: Not Supported
00:16:33.071 SGL Metadata Address: Not Supported
00:16:33.071 SGL Offset: Supported
00:16:33.071 Transport SGL Data Block: Not Supported
00:16:33.071 Replay Protected Memory Block: Not Supported
00:16:33.071
00:16:33.071 Firmware Slot Information
00:16:33.071 =========================
00:16:33.071 Active slot: 0
00:16:33.071
00:16:33.071
00:16:33.071 Error Log
00:16:33.071 =========
00:16:33.071
00:16:33.071 Active Namespaces
00:16:33.071 =================
00:16:33.071 Discovery Log Page
00:16:33.071 ==================
00:16:33.071 Generation Counter: 2
00:16:33.071 Number of Records: 2
00:16:33.071 Record Format: 0
00:16:33.071
00:16:33.071 Discovery Log Entry 0
00:16:33.071 ----------------------
00:16:33.071 Transport Type: 3 (TCP)
00:16:33.071 Address Family: 1 (IPv4)
00:16:33.071 Subsystem Type: 3 (Current Discovery Subsystem)
00:16:33.071 Entry Flags:
00:16:33.071 Duplicate Returned
Information: 1 00:16:33.071 Explicit Persistent Connection Support for Discovery: 1 00:16:33.071 Transport Requirements: 00:16:33.071 Secure Channel: Not Required 00:16:33.071 Port ID: 0 (0x0000) 00:16:33.071 Controller ID: 65535 (0xffff) 00:16:33.071 Admin Max SQ Size: 128 00:16:33.071 Transport Service Identifier: 4420 00:16:33.071 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:33.071 Transport Address: 10.0.0.2 00:16:33.071 Discovery Log Entry 1 00:16:33.071 ---------------------- 00:16:33.071 Transport Type: 3 (TCP) 00:16:33.071 Address Family: 1 (IPv4) 00:16:33.071 Subsystem Type: 2 (NVM Subsystem) 00:16:33.071 Entry Flags: 00:16:33.071 Duplicate Returned Information: 0 00:16:33.071 Explicit Persistent Connection Support for Discovery: 0 00:16:33.071 Transport Requirements: 00:16:33.071 Secure Channel: Not Required 00:16:33.071 Port ID: 0 (0x0000) 00:16:33.071 Controller ID: 65535 (0xffff) 00:16:33.071 Admin Max SQ Size: 128 00:16:33.071 Transport Service Identifier: 4420 00:16:33.071 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:33.071 Transport Address: 10.0.0.2 [2024-07-25 12:23:47.824557] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:33.071 [2024-07-25 12:23:47.824572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca840) on tqpair=0xd87a60 00:16:33.071 [2024-07-25 12:23:47.824581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.071 [2024-07-25 12:23:47.824588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdca9c0) on tqpair=0xd87a60 00:16:33.071 [2024-07-25 12:23:47.824593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.071 [2024-07-25 12:23:47.824599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcab40) on tqpair=0xd87a60 00:16:33.071 [2024-07-25 12:23:47.824604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.071 [2024-07-25 12:23:47.824609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.071 [2024-07-25 12:23:47.824614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.071 [2024-07-25 12:23:47.824630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.071 [2024-07-25 12:23:47.824635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.824640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.824652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.824692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.824766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.824774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.824787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.824791] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.824806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.824812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.824816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.824824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.824852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.824942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.824950] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.824954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.824958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.824964] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:33.072 [2024-07-25 12:23:47.824969] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:33.072 [2024-07-25 12:23:47.824981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.824986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.824990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.824998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.825017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.825104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.825111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.825115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.825131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.825148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.825167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.825237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.825245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.825248] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.825264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.825280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.825312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.825379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.825387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.825390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.825406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.825424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.825445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.825503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.825510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.825513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.825529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.825546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.825564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.825631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.825638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.825642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 
[2024-07-25 12:23:47.825657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.825674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.825692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.825757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.825764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.825768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.825783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.825799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.825818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.825884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.825891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.825895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.825910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.825919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.825927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.825946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.826008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.826015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.826018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.826023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.826033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.826039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 
12:23:47.826043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.826050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.826080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.072 [2024-07-25 12:23:47.826143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.072 [2024-07-25 12:23:47.826150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.072 [2024-07-25 12:23:47.826154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.826159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.072 [2024-07-25 12:23:47.826169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.826175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.072 [2024-07-25 12:23:47.826179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.072 [2024-07-25 12:23:47.826186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.072 [2024-07-25 12:23:47.826204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.073 [2024-07-25 12:23:47.826271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.073 [2024-07-25 12:23:47.826290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.073 [2024-07-25 12:23:47.830326] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.073 [2024-07-25 12:23:47.830336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.073 [2024-07-25 12:23:47.830353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.073 [2024-07-25 12:23:47.830360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.073 [2024-07-25 12:23:47.830364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd87a60) 00:16:33.073 [2024-07-25 12:23:47.830373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.073 [2024-07-25 12:23:47.830401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdcacc0, cid 3, qid 0 00:16:33.073 [2024-07-25 12:23:47.830478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.073 [2024-07-25 12:23:47.830485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.073 [2024-07-25 12:23:47.830489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.073 [2024-07-25 12:23:47.830493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdcacc0) on tqpair=0xd87a60 00:16:33.073 [2024-07-25 12:23:47.830502] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:16:33.073 00:16:33.073 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:33.073 [2024-07-25 
12:23:47.871624] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:33.073 [2024-07-25 12:23:47.871674] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86202 ] 00:16:33.334 [2024-07-25 12:23:48.009748] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:33.334 [2024-07-25 12:23:48.009836] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:33.334 [2024-07-25 12:23:48.009844] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:33.334 [2024-07-25 12:23:48.009857] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:33.334 [2024-07-25 12:23:48.009869] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:33.334 [2024-07-25 12:23:48.010057] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:33.334 [2024-07-25 12:23:48.010109] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1eaba60 0 00:16:33.334 [2024-07-25 12:23:48.022322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:33.334 [2024-07-25 12:23:48.022349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:33.334 [2024-07-25 12:23:48.022356] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:33.334 [2024-07-25 12:23:48.022360] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:33.334 [2024-07-25 12:23:48.022412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.334 [2024-07-25 12:23:48.022420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.334 [2024-07-25 12:23:48.022425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.334 [2024-07-25 12:23:48.022441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:33.334 [2024-07-25 12:23:48.022476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.334 [2024-07-25 12:23:48.030319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.334 [2024-07-25 12:23:48.030342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.334 [2024-07-25 12:23:48.030347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.334 [2024-07-25 12:23:48.030354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eee840) on tqpair=0x1eaba60 00:16:33.334 [2024-07-25 12:23:48.030365] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:33.334 [2024-07-25 12:23:48.030374] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:33.334 [2024-07-25 12:23:48.030381] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:33.335 [2024-07-25 12:23:48.030401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030407] nvme_tcp.c: 
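The "pdu type" values these traces print follow the NVMe/TCP PDU type encoding: type 1 is the ICResp that answers the icreq sent during nvme_tcp_qpair_connect_sock, type 5 is a CapsuleResp carrying a command completion, and type 7 (seen later in the trace) is C2HData carrying controller-to-host payload. A minimal decoder for the types appearing in this log, as a sketch; the constants are the spec-defined PDU type values, not symbols taken from the SPDK source:

#include <stdio.h>

/* NVMe/TCP PDU type values as defined by the transport specification. */
static const char *pdu_type_name(int t)
{
    switch (t) {
    case 0x00: return "ICReq";
    case 0x01: return "ICResp";       /* answers the connect-time icreq   */
    case 0x02: return "H2CTermReq";
    case 0x03: return "C2HTermReq";
    case 0x04: return "CapsuleCmd";
    case 0x05: return "CapsuleResp";  /* command completion capsule       */
    case 0x06: return "H2CData";
    case 0x07: return "C2HData";      /* controller-to-host data          */
    case 0x09: return "R2T";
    default:   return "reserved";
    }
}

int main(void)
{
    const int seen[] = { 1, 5, 7 };   /* the types that appear in this trace */
    for (unsigned i = 0; i < sizeof(seen) / sizeof(seen[0]); i++)
        printf("pdu type = %d -> %s\n", seen[i], pdu_type_name(seen[i]));
    return 0;
}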
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.335 [2024-07-25 12:23:48.030421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.335 [2024-07-25 12:23:48.030452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.335 [2024-07-25 12:23:48.030554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.335 [2024-07-25 12:23:48.030562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.335 [2024-07-25 12:23:48.030566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eee840) on tqpair=0x1eaba60 00:16:33.335 [2024-07-25 12:23:48.030577] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:33.335 [2024-07-25 12:23:48.030585] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:33.335 [2024-07-25 12:23:48.030593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.335 [2024-07-25 12:23:48.030609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.335 [2024-07-25 12:23:48.030631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.335 [2024-07-25 12:23:48.030706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.335 [2024-07-25 12:23:48.030714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.335 [2024-07-25 12:23:48.030718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eee840) on tqpair=0x1eaba60 00:16:33.335 [2024-07-25 12:23:48.030737] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:33.335 [2024-07-25 12:23:48.030746] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:33.335 [2024-07-25 12:23:48.030764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.335 [2024-07-25 12:23:48.030780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.335 [2024-07-25 12:23:48.030802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.335 [2024-07-25 12:23:48.030873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.335 [2024-07-25 12:23:48.030881] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.335 [2024-07-25 12:23:48.030885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eee840) on tqpair=0x1eaba60 00:16:33.335 [2024-07-25 12:23:48.030896] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:33.335 [2024-07-25 12:23:48.030907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.030915] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.335 [2024-07-25 12:23:48.030923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.335 [2024-07-25 12:23:48.030944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.335 [2024-07-25 12:23:48.031011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.335 [2024-07-25 12:23:48.031018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.335 [2024-07-25 12:23:48.031022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eee840) on tqpair=0x1eaba60 00:16:33.335 [2024-07-25 12:23:48.031032] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:33.335 [2024-07-25 12:23:48.031037] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:33.335 [2024-07-25 12:23:48.031046] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:33.335 [2024-07-25 12:23:48.031152] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:33.335 [2024-07-25 12:23:48.031157] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:33.335 [2024-07-25 12:23:48.031166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.335 [2024-07-25 12:23:48.031183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.335 [2024-07-25 12:23:48.031204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.335 [2024-07-25 12:23:48.031273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.335 [2024-07-25 12:23:48.031280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.335 [2024-07-25 12:23:48.031284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031289] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eee840) on tqpair=0x1eaba60 00:16:33.335 [2024-07-25 12:23:48.031308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:33.335 [2024-07-25 12:23:48.031321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.335 [2024-07-25 12:23:48.031339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.335 [2024-07-25 12:23:48.031362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.335 [2024-07-25 12:23:48.031480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.335 [2024-07-25 12:23:48.031494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.335 [2024-07-25 12:23:48.031499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eee840) on tqpair=0x1eaba60 00:16:33.335 [2024-07-25 12:23:48.031508] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:33.335 [2024-07-25 12:23:48.031514] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:33.335 [2024-07-25 12:23:48.031522] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:33.335 [2024-07-25 12:23:48.031534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:33.335 [2024-07-25 12:23:48.031545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.335 [2024-07-25 12:23:48.031557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.335 [2024-07-25 12:23:48.031579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.335 [2024-07-25 12:23:48.031708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.335 [2024-07-25 12:23:48.031716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.335 [2024-07-25 12:23:48.031720] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031725] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eaba60): datao=0, datal=4096, cccid=0 00:16:33.335 [2024-07-25 12:23:48.031730] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eee840) on tqpair(0x1eaba60): expected_datao=0, payload_size=4096 00:16:33.335 [2024-07-25 12:23:48.031743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031752] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
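The state transitions above (check en, disable and wait for CSTS.RDY = 0, "Setting CC.EN = 1", wait for CSTS.RDY = 1, "controller is ready") are the standard controller-enable handshake, driven here over Fabrics PROPERTY GET/SET capsules rather than memory-mapped registers. A schematic sketch of that handshake; prop_get()/prop_set() are hypothetical stand-ins for the property exchanges, with a trivial in-process fake in place of a real controller:

#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC   0x14            /* Controller Configuration register */
#define NVME_REG_CSTS 0x1c            /* Controller Status register        */
#define NVME_CC_EN    (1u << 0)
#define NVME_CSTS_RDY (1u << 0)

/* Hypothetical transport helpers: a real driver would issue Fabrics
 * PROPERTY GET/SET capsules over the admin queue pair here. */
static uint32_t fake_csts;            /* simulated controller state        */
static uint32_t prop_get(uint32_t off)
{
    return off == NVME_REG_CSTS ? fake_csts : 0;
}
static void prop_set(uint32_t off, uint32_t val)
{
    if (off == NVME_REG_CC)           /* fake controller mirrors CC.EN     */
        fake_csts = (val & NVME_CC_EN) ? NVME_CSTS_RDY : 0;
}

int main(void)
{
    /* 1. Disable and wait for CSTS.RDY = 0 (log: "setting state to disable
     *    and wait for CSTS.RDY = 0 (timeout 15000 ms)"). */
    prop_set(NVME_REG_CC, 0);
    while (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)
        ;                             /* real code enforces the timeout    */

    /* 2. Enable (log: "Setting CC.EN = 1"). */
    prop_set(NVME_REG_CC, NVME_CC_EN);

    /* 3. Wait for CSTS.RDY = 1 before moving on to IDENTIFY. */
    while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY))
        ;
    printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
    return 0;
}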
enter 00:16:33.335 [2024-07-25 12:23:48.031756] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.335 [2024-07-25 12:23:48.031766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.336 [2024-07-25 12:23:48.031772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.336 [2024-07-25 12:23:48.031776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.031780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eee840) on tqpair=0x1eaba60 00:16:33.336 [2024-07-25 12:23:48.031790] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:33.336 [2024-07-25 12:23:48.031795] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:33.336 [2024-07-25 12:23:48.031801] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:33.336 [2024-07-25 12:23:48.031810] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:33.336 [2024-07-25 12:23:48.031827] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:33.336 [2024-07-25 12:23:48.031832] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.031842] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.031851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.031856] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.031860] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.336 [2024-07-25 12:23:48.031868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.336 [2024-07-25 12:23:48.031890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.336 [2024-07-25 12:23:48.031969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.336 [2024-07-25 12:23:48.031976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.336 [2024-07-25 12:23:48.031980] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.031984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eee840) on tqpair=0x1eaba60 00:16:33.336 [2024-07-25 12:23:48.031993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.031998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eaba60) 00:16:33.336 [2024-07-25 12:23:48.032009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.336 [2024-07-25 12:23:48.032020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.336 
[2024-07-25 12:23:48.032028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1eaba60) 00:16:33.336 [2024-07-25 12:23:48.032034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.336 [2024-07-25 12:23:48.032040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1eaba60) 00:16:33.336 [2024-07-25 12:23:48.032054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.336 [2024-07-25 12:23:48.032060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eaba60) 00:16:33.336 [2024-07-25 12:23:48.032074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.336 [2024-07-25 12:23:48.032081] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.032091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.032107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eaba60) 00:16:33.336 [2024-07-25 12:23:48.032119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.336 [2024-07-25 12:23:48.032166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee840, cid 0, qid 0 00:16:33.336 [2024-07-25 12:23:48.032174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eee9c0, cid 1, qid 0 00:16:33.336 [2024-07-25 12:23:48.032179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeeb40, cid 2, qid 0 00:16:33.336 [2024-07-25 12:23:48.032184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeecc0, cid 3, qid 0 00:16:33.336 [2024-07-25 12:23:48.032189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeee40, cid 4, qid 0 00:16:33.336 [2024-07-25 12:23:48.032310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.336 [2024-07-25 12:23:48.032320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.336 [2024-07-25 12:23:48.032324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeee40) on tqpair=0x1eaba60 00:16:33.336 [2024-07-25 12:23:48.032334] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:33.336 [2024-07-25 12:23:48.032340] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
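The keep-alive interval printed just below, 5000000 us, is exactly half of a 10000 ms keep-alive timeout; the halving and the 10000 ms figure are read off this log (the GET FEATURES KEEP ALIVE TIMER exchange precedes it), not quoted from the SPDK source. A sketch of the arithmetic:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Timeout read back via GET FEATURES KEEP ALIVE TIMER (cid:4 above);
     * 10000 ms is the value consistent with the interval in the trace. */
    uint32_t keep_alive_timeout_ms = 10000;

    /* The driver arms transmissions at half the negotiated timeout:
     * 10000 ms / 2 = 5000000 us ("Sending keep alive every 5000000 us"). */
    uint64_t interval_us = (uint64_t)keep_alive_timeout_ms * 1000 / 2;
    printf("Sending keep alive every %llu us\n",
           (unsigned long long)interval_us);
    return 0;
}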
[nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.032350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.032357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.032364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eaba60) 00:16:33.336 [2024-07-25 12:23:48.032381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.336 [2024-07-25 12:23:48.032404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeee40, cid 4, qid 0 00:16:33.336 [2024-07-25 12:23:48.032505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.336 [2024-07-25 12:23:48.032513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.336 [2024-07-25 12:23:48.032517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeee40) on tqpair=0x1eaba60 00:16:33.336 [2024-07-25 12:23:48.032589] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.032602] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.032611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eaba60) 00:16:33.336 [2024-07-25 12:23:48.032623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.336 [2024-07-25 12:23:48.032646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeee40, cid 4, qid 0 00:16:33.336 [2024-07-25 12:23:48.032772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.336 [2024-07-25 12:23:48.032781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.336 [2024-07-25 12:23:48.032785] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032789] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eaba60): datao=0, datal=4096, cccid=4 00:16:33.336 [2024-07-25 12:23:48.032794] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eeee40) on tqpair(0x1eaba60): expected_datao=0, payload_size=4096 00:16:33.336 [2024-07-25 12:23:48.032799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032807] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032811] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032820] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.336 [2024-07-25 12:23:48.032826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.336 [2024-07-25 12:23:48.032830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeee40) on tqpair=0x1eaba60 00:16:33.336 [2024-07-25 12:23:48.032846] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:33.336 [2024-07-25 12:23:48.032860] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.032871] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:33.336 [2024-07-25 12:23:48.032879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.032884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eaba60) 00:16:33.336 [2024-07-25 12:23:48.032891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.336 [2024-07-25 12:23:48.032915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeee40, cid 4, qid 0 00:16:33.336 [2024-07-25 12:23:48.033015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.336 [2024-07-25 12:23:48.033022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.336 [2024-07-25 12:23:48.033026] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.033030] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eaba60): datao=0, datal=4096, cccid=4 00:16:33.336 [2024-07-25 12:23:48.033035] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eeee40) on tqpair(0x1eaba60): expected_datao=0, payload_size=4096 00:16:33.336 [2024-07-25 12:23:48.033040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.033047] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.033051] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.336 [2024-07-25 12:23:48.033060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.337 [2024-07-25 12:23:48.033067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.337 [2024-07-25 12:23:48.033070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeee40) on tqpair=0x1eaba60 00:16:33.337 [2024-07-25 12:23:48.033091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:33.337 [2024-07-25 12:23:48.033104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:33.337 [2024-07-25 12:23:48.033113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eaba60) 
00:16:33.337 [2024-07-25 12:23:48.033125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.337 [2024-07-25 12:23:48.033148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeee40, cid 4, qid 0 00:16:33.337 [2024-07-25 12:23:48.033229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.337 [2024-07-25 12:23:48.033236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.337 [2024-07-25 12:23:48.033240] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033244] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eaba60): datao=0, datal=4096, cccid=4 00:16:33.337 [2024-07-25 12:23:48.033249] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eeee40) on tqpair(0x1eaba60): expected_datao=0, payload_size=4096 00:16:33.337 [2024-07-25 12:23:48.033254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033261] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033265] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.337 [2024-07-25 12:23:48.033281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.337 [2024-07-25 12:23:48.033284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeee40) on tqpair=0x1eaba60 00:16:33.337 [2024-07-25 12:23:48.033311] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:33.337 [2024-07-25 12:23:48.033324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:33.337 [2024-07-25 12:23:48.033335] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:33.337 [2024-07-25 12:23:48.033342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:33.337 [2024-07-25 12:23:48.033348] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:33.337 [2024-07-25 12:23:48.033353] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:33.337 [2024-07-25 12:23:48.033359] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:33.337 [2024-07-25 12:23:48.033364] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:33.337 [2024-07-25 12:23:48.033370] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:33.337 [2024-07-25 12:23:48.033387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033392] nvme_tcp.c: 
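Each IDENTIFY in the sequence above selects its returned data structure through the CNS field in cdw10, which is why the trace walks cdw10:00000001 (controller), 00000002 (active namespace ID list, after which "Namespace 1 was added"), 00000000 (namespace), and 00000003 (namespace identification descriptors), each answered with a 4096-byte C2HData payload. A small decoder for the CNS values seen here, per the NVMe base specification:

#include <stdint.h>
#include <stdio.h>

/* CNS values used by the IDENTIFY commands in this trace (cdw10 bits 7:0). */
static const char *cns_name(uint8_t cns)
{
    switch (cns) {
    case 0x00: return "Identify Namespace";
    case 0x01: return "Identify Controller";
    case 0x02: return "Active Namespace ID List";
    case 0x03: return "Namespace Identification Descriptors";
    default:   return "other";
    }
}

int main(void)
{
    /* cdw10 values in the order they appear above. */
    const uint8_t seen[] = { 0x01, 0x02, 0x00, 0x03 };
    for (unsigned i = 0; i < sizeof(seen); i++)
        printf("cdw10 CNS 0x%02x -> %s\n", seen[i], cns_name(seen[i]));
    return 0;
}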
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eaba60) 00:16:33.337 [2024-07-25 12:23:48.033400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.337 [2024-07-25 12:23:48.033408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1eaba60) 00:16:33.337 [2024-07-25 12:23:48.033423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.337 [2024-07-25 12:23:48.033476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeee40, cid 4, qid 0 00:16:33.337 [2024-07-25 12:23:48.033484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeefc0, cid 5, qid 0 00:16:33.337 [2024-07-25 12:23:48.033568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.337 [2024-07-25 12:23:48.033575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.337 [2024-07-25 12:23:48.033579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeee40) on tqpair=0x1eaba60 00:16:33.337 [2024-07-25 12:23:48.033591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.337 [2024-07-25 12:23:48.033597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.337 [2024-07-25 12:23:48.033601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeefc0) on tqpair=0x1eaba60 00:16:33.337 [2024-07-25 12:23:48.033616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1eaba60) 00:16:33.337 [2024-07-25 12:23:48.033628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.337 [2024-07-25 12:23:48.033649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeefc0, cid 5, qid 0 00:16:33.337 [2024-07-25 12:23:48.033718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.337 [2024-07-25 12:23:48.033725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.337 [2024-07-25 12:23:48.033729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeefc0) on tqpair=0x1eaba60 00:16:33.337 [2024-07-25 12:23:48.033745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1eaba60) 00:16:33.337 [2024-07-25 12:23:48.033757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.337 [2024-07-25 12:23:48.033776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1eeefc0, cid 5, qid 0 00:16:33.337 [2024-07-25 12:23:48.033841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.337 [2024-07-25 12:23:48.033848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.337 [2024-07-25 12:23:48.033852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeefc0) on tqpair=0x1eaba60 00:16:33.337 [2024-07-25 12:23:48.033867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1eaba60) 00:16:33.337 [2024-07-25 12:23:48.033879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.337 [2024-07-25 12:23:48.033899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeefc0, cid 5, qid 0 00:16:33.337 [2024-07-25 12:23:48.033969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.337 [2024-07-25 12:23:48.033977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.337 [2024-07-25 12:23:48.033980] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.033985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeefc0) on tqpair=0x1eaba60 00:16:33.337 [2024-07-25 12:23:48.034005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.034011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1eaba60) 00:16:33.337 [2024-07-25 12:23:48.034019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.337 [2024-07-25 12:23:48.034027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.034031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eaba60) 00:16:33.337 [2024-07-25 12:23:48.034037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.337 [2024-07-25 12:23:48.034045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.034049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1eaba60) 00:16:33.337 [2024-07-25 12:23:48.034056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.337 [2024-07-25 12:23:48.034064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.034068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1eaba60) 00:16:33.337 [2024-07-25 12:23:48.034075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.337 [2024-07-25 12:23:48.034108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeefc0, cid 5, qid 0 00:16:33.337 [2024-07-25 12:23:48.034116] nvme_tcp.c: 
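The four GET LOG PAGE commands just above encode both the page and the transfer length in CDW10: bits 7:0 carry the log page ID (LID) and bits 31:16 the number of dwords minus one (NUMDL). Decoding the cdw10 values from the trace reproduces the payload sizes seen in the c2h_data entries that follow (8192, 512, 512, and 4096 bytes); a small sketch of that decoding:

#include <inttypes.h>
#include <stdio.h>

static void decode_cdw10(uint32_t cdw10)
{
    uint32_t lid   = cdw10 & 0xff;           /* log page identifier       */
    uint32_t numdl = (cdw10 >> 16) & 0xffff; /* dword count, 0's based    */
    uint32_t bytes = (numdl + 1) * 4;
    printf("cdw10=%08" PRIx32 " -> LID 0x%02" PRIx32 ", %" PRIu32 " bytes\n",
           cdw10, lid, bytes);
}

int main(void)
{
    decode_cdw10(0x07ff0001);  /* Error Information:            8192 bytes */
    decode_cdw10(0x007f0002);  /* SMART / Health Information:    512 bytes */
    decode_cdw10(0x007f0003);  /* Firmware Slot Information:     512 bytes */
    decode_cdw10(0x03ff0005);  /* Commands Supported & Effects: 4096 bytes */
    return 0;
}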
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeee40, cid 4, qid 0 00:16:33.337 [2024-07-25 12:23:48.034121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eef140, cid 6, qid 0 00:16:33.337 [2024-07-25 12:23:48.034126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eef2c0, cid 7, qid 0 00:16:33.337 [2024-07-25 12:23:48.034320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.337 [2024-07-25 12:23:48.034342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.337 [2024-07-25 12:23:48.034347] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.337 [2024-07-25 12:23:48.034352] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eaba60): datao=0, datal=8192, cccid=5 00:16:33.338 [2024-07-25 12:23:48.034357] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eeefc0) on tqpair(0x1eaba60): expected_datao=0, payload_size=8192 00:16:33.338 [2024-07-25 12:23:48.034362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034381] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034387] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.338 [2024-07-25 12:23:48.034399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.338 [2024-07-25 12:23:48.034403] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034407] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eaba60): datao=0, datal=512, cccid=4 00:16:33.338 [2024-07-25 12:23:48.034412] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eeee40) on tqpair(0x1eaba60): expected_datao=0, payload_size=512 00:16:33.338 [2024-07-25 12:23:48.034416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034423] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034427] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.338 [2024-07-25 12:23:48.034450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.338 [2024-07-25 12:23:48.034453] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034457] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eaba60): datao=0, datal=512, cccid=6 00:16:33.338 [2024-07-25 12:23:48.034462] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eef140) on tqpair(0x1eaba60): expected_datao=0, payload_size=512 00:16:33.338 [2024-07-25 12:23:48.034466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034472] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034483] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:33.338 [2024-07-25 12:23:48.034494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:33.338 [2024-07-25 12:23:48.034498] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:16:33.338 [2024-07-25 12:23:48.034501] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eaba60): datao=0, datal=4096, cccid=7 00:16:33.338 [2024-07-25 12:23:48.034506] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eef2c0) on tqpair(0x1eaba60): expected_datao=0, payload_size=4096 00:16:33.338 [2024-07-25 12:23:48.034511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034517] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034521] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.338 [2024-07-25 12:23:48.034537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.338 [2024-07-25 12:23:48.034540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeefc0) on tqpair=0x1eaba60 00:16:33.338 ===================================================== 00:16:33.338 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:33.338 ===================================================== 00:16:33.338 Controller Capabilities/Features 00:16:33.338 ================================ 00:16:33.338 Vendor ID: 8086 00:16:33.338 Subsystem Vendor ID: 8086 00:16:33.338 Serial Number: SPDK00000000000001 00:16:33.338 Model Number: SPDK bdev Controller 00:16:33.338 Firmware Version: 24.09 00:16:33.338 Recommended Arb Burst: 6 00:16:33.338 IEEE OUI Identifier: e4 d2 5c 00:16:33.338 Multi-path I/O 00:16:33.338 May have multiple subsystem ports: Yes 00:16:33.338 May have multiple controllers: Yes 00:16:33.338 Associated with SR-IOV VF: No 00:16:33.338 Max Data Transfer Size: 131072 00:16:33.338 Max Number of Namespaces: 32 00:16:33.338 Max Number of I/O Queues: 127 00:16:33.338 NVMe Specification Version (VS): 1.3 00:16:33.338 NVMe Specification Version (Identify): 1.3 00:16:33.338 Maximum Queue Entries: 128 00:16:33.338 Contiguous Queues Required: Yes 00:16:33.338 Arbitration Mechanisms Supported 00:16:33.338 Weighted Round Robin: Not Supported 00:16:33.338 Vendor Specific: Not Supported 00:16:33.338 Reset Timeout: 15000 ms 00:16:33.338 Doorbell Stride: 4 bytes 00:16:33.338 NVM Subsystem Reset: Not Supported 00:16:33.338 Command Sets Supported 00:16:33.338 NVM Command Set: Supported 00:16:33.338 Boot Partition: Not Supported 00:16:33.338 Memory Page Size Minimum: 4096 bytes 00:16:33.338 Memory Page Size Maximum: 4096 bytes 00:16:33.338 Persistent Memory Region: Not Supported 00:16:33.338 Optional Asynchronous Events Supported 00:16:33.338 Namespace Attribute Notices: Supported 00:16:33.338 Firmware Activation Notices: Not Supported 00:16:33.338 ANA Change Notices: Not Supported 00:16:33.338 PLE Aggregate Log Change Notices: Not Supported 00:16:33.338 LBA Status Info Alert Notices: Not Supported 00:16:33.338 EGE Aggregate Log Change Notices: Not Supported 00:16:33.338 Normal NVM Subsystem Shutdown event: Not Supported 00:16:33.338 Zone Descriptor Change Notices: Not Supported 00:16:33.338 Discovery Log Change Notices: Not Supported 00:16:33.338 Controller Attributes 00:16:33.338 128-bit Host Identifier: Supported 00:16:33.338 Non-Operational Permissive Mode: Not Supported 00:16:33.338 NVM Sets: Not Supported 00:16:33.338 Read Recovery Levels: Not Supported 00:16:33.338 Endurance 
Groups: Not Supported 00:16:33.338 Predictable Latency Mode: Not Supported 00:16:33.338 Traffic Based Keep ALive: Not Supported 00:16:33.338 Namespace Granularity: Not Supported 00:16:33.338 SQ Associations: Not Supported 00:16:33.338 UUID List: Not Supported 00:16:33.338 Multi-Domain Subsystem: Not Supported 00:16:33.338 Fixed Capacity Management: Not Supported 00:16:33.338 Variable Capacity Management: Not Supported 00:16:33.338 Delete Endurance Group: Not Supported 00:16:33.338 Delete NVM Set: Not Supported 00:16:33.338 Extended LBA Formats Supported: Not Supported 00:16:33.338 Flexible Data Placement Supported: Not Supported 00:16:33.338 00:16:33.338 Controller Memory Buffer Support 00:16:33.338 ================================ 00:16:33.338 Supported: No 00:16:33.338 00:16:33.338 Persistent Memory Region Support 00:16:33.338 ================================ 00:16:33.338 Supported: No 00:16:33.338 00:16:33.338 Admin Command Set Attributes 00:16:33.338 ============================ 00:16:33.338 Security Send/Receive: Not Supported 00:16:33.338 Format NVM: Not Supported 00:16:33.338 Firmware Activate/Download: Not Supported 00:16:33.338 Namespace Management: Not Supported 00:16:33.338 Device Self-Test: Not Supported 00:16:33.338 Directives: Not Supported 00:16:33.338 NVMe-MI: Not Supported 00:16:33.338 Virtualization Management: Not Supported 00:16:33.338 Doorbell Buffer Config: Not Supported 00:16:33.338 Get LBA Status Capability: Not Supported 00:16:33.338 Command & Feature Lockdown Capability: Not Supported 00:16:33.338 Abort Command Limit: 4 00:16:33.338 Async Event Request Limit: 4 00:16:33.338 Number of Firmware Slots: N/A 00:16:33.338 Firmware Slot 1 Read-Only: N/A 00:16:33.338 [2024-07-25 12:23:48.034576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.338 [2024-07-25 12:23:48.034583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.338 [2024-07-25 12:23:48.034587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eeee40) on tqpair=0x1eaba60 00:16:33.338 [2024-07-25 12:23:48.034605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.338 [2024-07-25 12:23:48.034611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.338 [2024-07-25 12:23:48.034615] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eef140) on tqpair=0x1eaba60 00:16:33.338 [2024-07-25 12:23:48.034626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:33.338 [2024-07-25 12:23:48.034632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:33.338 [2024-07-25 12:23:48.034636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:33.338 [2024-07-25 12:23:48.034640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eef2c0) on tqpair=0x1eaba60 00:16:33.338 Firmware Activation Without Reset: N/A 00:16:33.338 Multiple Update Detection Support: N/A 00:16:33.338 Firmware Update Granularity: No Information Provided 00:16:33.338 Per-Namespace SMART Log: No 00:16:33.338 Asymmetric Namespace Access Log Page: Not Supported 00:16:33.338 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:33.338 Command Effects Log Page: Supported 00:16:33.338 Get Log Page Extended Data: Supported 00:16:33.339 Telemetry Log Pages: Not
Supported 00:16:33.339 Persistent Event Log Pages: Not Supported 00:16:33.339 Supported Log Pages Log Page: May Support 00:16:33.339 Commands Supported & Effects Log Page: Not Supported 00:16:33.339 Feature Identifiers & Effects Log Page:May Support 00:16:33.339 NVMe-MI Commands & Effects Log Page: May Support 00:16:33.339 Data Area 4 for Telemetry Log: Not Supported 00:16:33.339 Error Log Page Entries Supported: 128 00:16:33.339 Keep Alive: Supported 00:16:33.339 Keep Alive Granularity: 10000 ms 00:16:33.339 00:16:33.339 NVM Command Set Attributes 00:16:33.339 ========================== 00:16:33.339 Submission Queue Entry Size 00:16:33.339 Max: 64 00:16:33.339 Min: 64 00:16:33.339 Completion Queue Entry Size 00:16:33.339 Max: 16 00:16:33.339 Min: 16 00:16:33.339 Number of Namespaces: 32 00:16:33.339 Compare Command: Supported 00:16:33.339 Write Uncorrectable Command: Not Supported 00:16:33.339 Dataset Management Command: Supported 00:16:33.339 Write Zeroes Command: Supported 00:16:33.339 Set Features Save Field: Not Supported 00:16:33.339 Reservations: Supported 00:16:33.339 Timestamp: Not Supported 00:16:33.339 Copy: Supported 00:16:33.339 Volatile Write Cache: Present 00:16:33.339 Atomic Write Unit (Normal): 1 00:16:33.339 Atomic Write Unit (PFail): 1 00:16:33.339 Atomic Compare & Write Unit: 1 00:16:33.339 Fused Compare & Write: Supported 00:16:33.339 Scatter-Gather List 00:16:33.339 SGL Command Set: Supported 00:16:33.339 SGL Keyed: Supported 00:16:33.339 SGL Bit Bucket Descriptor: Not Supported 00:16:33.339 SGL Metadata Pointer: Not Supported 00:16:33.339 Oversized SGL: Not Supported 00:16:33.339 SGL Metadata Address: Not Supported 00:16:33.339 SGL Offset: Supported 00:16:33.339 Transport SGL Data Block: Not Supported 00:16:33.339 Replay Protected Memory Block: Not Supported 00:16:33.339 00:16:33.339 Firmware Slot Information 00:16:33.339 ========================= 00:16:33.339 Active slot: 1 00:16:33.339 Slot 1 Firmware Revision: 24.09 00:16:33.339 00:16:33.339 00:16:33.339 Commands Supported and Effects 00:16:33.339 ============================== 00:16:33.339 Admin Commands 00:16:33.339 -------------- 00:16:33.339 Get Log Page (02h): Supported 00:16:33.339 Identify (06h): Supported 00:16:33.339 Abort (08h): Supported 00:16:33.339 Set Features (09h): Supported 00:16:33.339 Get Features (0Ah): Supported 00:16:33.339 Asynchronous Event Request (0Ch): Supported 00:16:33.339 Keep Alive (18h): Supported 00:16:33.339 I/O Commands 00:16:33.339 ------------ 00:16:33.339 Flush (00h): Supported LBA-Change 00:16:33.339 Write (01h): Supported LBA-Change 00:16:33.339 Read (02h): Supported 00:16:33.339 Compare (05h): Supported 00:16:33.339 Write Zeroes (08h): Supported LBA-Change 00:16:33.339 Dataset Management (09h): Supported LBA-Change 00:16:33.339 Copy (19h): Supported LBA-Change 00:16:33.339 00:16:33.339 Error Log 00:16:33.339 ========= 00:16:33.339 00:16:33.339 Arbitration 00:16:33.339 =========== 00:16:33.339 Arbitration Burst: 1 00:16:33.339 00:16:33.339 Power Management 00:16:33.339 ================ 00:16:33.339 Number of Power States: 1 00:16:33.339 Current Power State: Power State #0 00:16:33.339 Power State #0: 00:16:33.339 Max Power: 0.00 W 00:16:33.339 Non-Operational State: Operational 00:16:33.339 Entry Latency: Not Reported 00:16:33.339 Exit Latency: Not Reported 00:16:33.339 Relative Read Throughput: 0 00:16:33.339 Relative Read Latency: 0 00:16:33.339 Relative Write Throughput: 0 00:16:33.339 Relative Write Latency: 0 00:16:33.339 Idle Power: Not Reported 00:16:33.339 Active 
00:16:33.339 Non-Operational Permissive Mode: Not Supported
00:16:33.339
00:16:33.339 Health Information
00:16:33.339 ==================
00:16:33.339 Critical Warnings:
00:16:33.339 Available Spare Space: OK
00:16:33.339 Temperature: OK
00:16:33.339 Device Reliability: OK
00:16:33.339 Read Only: No
00:16:33.339 Volatile Memory Backup: OK
00:16:33.339 Current Temperature: 0 Kelvin (-273 Celsius)
00:16:33.339 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:16:33.339 Available Spare: 0%
00:16:33.339 Available Spare Threshold: 0%
00:16:33.339 Life Percentage Used: 0%
00:16:33.339 [2024-07-25 12:23:48.034968] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:16:33.339 [2024-07-25 12:23:48.034987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:33.340 [2024-07-25 12:23:48.035328] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:16:33.340 [2024-07-25 12:23:48.035333] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
[... repeated nvme_tcp/nvme_qpair FABRIC PROPERTY GET shutdown-poll *DEBUG* cycles trimmed ...]
00:16:33.342 [2024-07-25 12:23:48.042523] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:16:33.342 Data Units Read: 0
00:16:33.342 Data Units Written: 0
00:16:33.342 Host Read Commands: 0
00:16:33.342 Host Write Commands: 0
00:16:33.342 Controller Busy Time: 0 minutes
00:16:33.342 Power Cycles: 0
00:16:33.342 Power On Hours: 0 hours
00:16:33.342 Unsafe Shutdowns: 0
00:16:33.342 Unrecoverable Media Errors: 0
00:16:33.342 Lifetime Error Log Entries: 0
00:16:33.342 Warning Temperature Time: 0 minutes
00:16:33.342 Critical Temperature Time: 0 minutes
00:16:33.342
00:16:33.342 Number of Queues
00:16:33.342 ================
00:16:33.342 Number of I/O Submission Queues: 127
00:16:33.342 Number of I/O Completion Queues: 127
00:16:33.342
00:16:33.342 Active Namespaces
00:16:33.342 =================
00:16:33.342 Namespace ID:1
00:16:33.342 Error Recovery Timeout: Unlimited
00:16:33.342 Command Set Identifier: NVM (00h)
00:16:33.342 Deallocate: Supported
00:16:33.342 Deallocated/Unwritten Error: Not Supported
00:16:33.342 Deallocated Read Value: Unknown
00:16:33.342 Deallocate in Write Zeroes: Not Supported
00:16:33.342 Deallocated Guard Field: 0xFFFF
00:16:33.342 Flush: Supported
00:16:33.342 Reservation: Supported
00:16:33.342 Namespace Sharing Capabilities: Multiple Controllers
00:16:33.342 Size (in LBAs): 131072 (0GiB)
00:16:33.342 Capacity (in LBAs): 131072 (0GiB)
00:16:33.342 Utilization (in LBAs): 131072 (0GiB)
00:16:33.342 NGUID: ABCDEF0123456789ABCDEF0123456789
00:16:33.342 EUI64: ABCDEF0123456789
00:16:33.342 UUID: d2820f9b-9b44-4e75-b0a3-134220162b0d
00:16:33.342 Thin Provisioning: Not Supported
00:16:33.342 Per-NS Atomic Units: Yes
00:16:33.342 Atomic Boundary Size (Normal): 0
00:16:33.342 Atomic Boundary Size (PFail): 0
00:16:33.342 Atomic Boundary Offset: 0
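The controller and namespace report in this block comes from SPDK's identify example application, which host/identify.sh points at the target subsystem. A minimal sketch for regenerating it by hand against the same listener; the build/examples path and the subnqn key are assumptions about this tree's build layout and the transport-ID syntax:

    /home/vagrant/spdk_repo/spdk/build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'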
00:16:33.342 Maximum Single Source Range Length: 65535 00:16:33.342 Maximum Copy Length: 65535 00:16:33.342 Maximum Source Range Count: 1 00:16:33.342 NGUID/EUI64 Never Reused: No 00:16:33.342 Namespace Write Protected: No 00:16:33.342 Number of LBA Formats: 1 00:16:33.342 Current LBA Format: LBA Format #00 00:16:33.342 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:33.342 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.342 rmmod nvme_tcp 00:16:33.342 rmmod nvme_fabrics 00:16:33.342 rmmod nvme_keyring 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86146 ']' 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86146 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 86146 ']' 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 86146 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.342 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86146 00:16:33.600 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.600 killing process with pid 86146 00:16:33.600 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.600 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86146' 00:16:33.600 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 86146 00:16:33.600 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 86146 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:33.859 00:16:33.859 real 0m2.680s 00:16:33.859 user 0m7.447s 00:16:33.859 sys 0m0.693s 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:33.859 ************************************ 00:16:33.859 END TEST nvmf_identify 00:16:33.859 ************************************ 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.859 ************************************ 00:16:33.859 START TEST nvmf_perf 00:16:33.859 ************************************ 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:33.859 * Looking for test storage... 
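The nvmf_perf suite that starts here can also be replayed outside the CI wrapper; a sketch using the same entry point and transport recorded in the run_test line above:

    cd /home/vagrant/spdk_repo/spdk
    sudo test/nvmf/host/perf.sh --transport=tcp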
00:16:33.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.859 12:23:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:33.859 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:33.860 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:33.860 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:33.860 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:33.860 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.860 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.860 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:33.860 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:34.125 Cannot find device "nvmf_tgt_br" 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.125 Cannot find device "nvmf_tgt_br2" 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:34.125 Cannot find device "nvmf_tgt_br" 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:16:34.125 Cannot find device "nvmf_tgt_br2" 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:34.125 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:34.395 12:23:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:34.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:16:34.395 00:16:34.395 --- 10.0.0.2 ping statistics --- 00:16:34.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.395 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:34.395 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:34.395 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:16:34.395 00:16:34.395 --- 10.0.0.3 ping statistics --- 00:16:34.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.395 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:34.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:34.395 00:16:34.395 --- 10.0.0.1 ping statistics --- 00:16:34.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.395 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86366 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86366 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 86366 ']' 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.395 12:23:49 
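The "Cannot find device" retries above are nvmf_veth_init tearing down any stale topology before rebuilding it. Condensed, and using the same interface and address names as the trace, the bring-up amounts to roughly this sketch (the nvmf_tgt_if2/10.0.0.3 pair is built the same way, and each interface also gets an 'ip link set ... up' as in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow verify both target addresses are reachable from the host and that the namespace can reach the initiator before any NVMe/TCP traffic is attempted.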
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.395 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.396 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.396 12:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:34.396 [2024-07-25 12:23:49.136329] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:34.396 [2024-07-25 12:23:49.136431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.653 [2024-07-25 12:23:49.279759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.653 [2024-07-25 12:23:49.398603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.653 [2024-07-25 12:23:49.398656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.653 [2024-07-25 12:23:49.398667] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.653 [2024-07-25 12:23:49.398676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.653 [2024-07-25 12:23:49.398686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.653 [2024-07-25 12:23:49.398854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.653 [2024-07-25 12:23:49.399363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.653 [2024-07-25 12:23:49.399530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.653 [2024-07-25 12:23:49.399768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.588 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.588 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:16:35.588 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.588 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.588 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:35.588 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.588 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:35.588 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:35.847 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:35.847 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:36.105 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:36.105 12:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.364 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:36.364 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:36.364 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:36.364 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:36.364 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:36.623 [2024-07-25 12:23:51.429886] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.623 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:36.881 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:36.881 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.139 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:37.139 12:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:37.398 12:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.656 [2024-07-25 12:23:52.443840] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.656 12:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:37.914 12:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:37.914 12:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:37.914 12:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:37.914 12:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:39.289 Initializing NVMe Controllers 00:16:39.289 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:39.289 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:39.289 Initialization complete. Launching workers. 
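Stripped of the xtrace noise, the target provisioning in this stage is a short RPC sequence; a sketch using the rpc.py path from this run (Nvme0n1 comes from the gen_nvme.sh config loaded above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_malloc_create 64 512   # prints the new bdev name, Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420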
00:16:39.289 ======================================================== 00:16:39.289 Latency(us) 00:16:39.289 Device Information : IOPS MiB/s Average min max 00:16:39.289 PCIE (0000:00:10.0) NSID 1 from core 0: 21465.49 83.85 1491.09 293.65 8111.09 00:16:39.289 ======================================================== 00:16:39.289 Total : 21465.49 83.85 1491.09 293.65 8111.09 00:16:39.289 00:16:39.289 12:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:40.223 Initializing NVMe Controllers 00:16:40.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:40.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:40.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:40.223 Initialization complete. Launching workers. 00:16:40.223 ======================================================== 00:16:40.224 Latency(us) 00:16:40.224 Device Information : IOPS MiB/s Average min max 00:16:40.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2820.61 11.02 354.15 118.59 7358.95 00:16:40.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.77 0.48 8145.45 6869.82 15085.78 00:16:40.224 ======================================================== 00:16:40.224 Total : 2943.37 11.50 679.12 118.59 15085.78 00:16:40.224 00:16:40.483 12:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:41.858 Initializing NVMe Controllers 00:16:41.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:41.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:41.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:41.858 Initialization complete. Launching workers. 00:16:41.858 ======================================================== 00:16:41.858 Latency(us) 00:16:41.858 Device Information : IOPS MiB/s Average min max 00:16:41.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8439.23 32.97 3792.56 623.52 10499.65 00:16:41.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2670.91 10.43 12023.14 5880.13 20478.38 00:16:41.858 ======================================================== 00:16:41.858 Total : 11110.13 43.40 5771.22 623.52 20478.38 00:16:41.858 00:16:41.858 12:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:41.858 12:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:44.442 Initializing NVMe Controllers 00:16:44.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:44.442 Controller IO queue size 128, less than required. 00:16:44.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:44.442 Controller IO queue size 128, less than required. 
00:16:44.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:44.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:44.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:44.442 Initialization complete. Launching workers. 00:16:44.442 ======================================================== 00:16:44.442 Latency(us) 00:16:44.442 Device Information : IOPS MiB/s Average min max 00:16:44.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1454.12 363.53 88965.00 54615.23 192547.67 00:16:44.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 556.09 139.02 236224.21 97465.81 345813.63 00:16:44.443 ======================================================== 00:16:44.443 Total : 2010.21 502.55 129701.72 54615.23 345813.63 00:16:44.443 00:16:44.443 12:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:16:44.443 Initializing NVMe Controllers 00:16:44.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:44.443 Controller IO queue size 128, less than required. 00:16:44.443 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:44.443 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:44.443 Controller IO queue size 128, less than required. 00:16:44.443 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:44.443 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:44.443 WARNING: Some requested NVMe devices were skipped 00:16:44.443 No valid NVMe controllers or AIO or URING devices found 00:16:44.701 12:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:16:47.229 Initializing NVMe Controllers 00:16:47.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:47.229 Controller IO queue size 128, less than required. 00:16:47.229 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:47.229 Controller IO queue size 128, less than required. 00:16:47.229 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:47.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:47.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:47.229 Initialization complete. Launching workers. 
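Two things are worth unpacking before the transport statistics below. First, the skipped -o 36964 run above is perf's alignment guard: the I/O size must be a whole number of logical blocks for every namespace under test, and 36964 misses a block boundary by 100 bytes for both the 512 B and the 4096 B namespace, so both are removed and the run aborts with "No valid NVMe controllers". The arithmetic, reproduced from the warning text:

    echo $(( 36964 % 512 ))    # 100 -> nsid 1 (512 B blocks) dropped
    echo $(( 36964 % 4096 ))   # 100 -> nsid 2 (4096 B blocks) dropped

Second, --transport-stat makes perf dump per-queue-pair TCP poll counters after the run: polls is the number of poll iterations and idle_polls those that found no work, so 1 - idle_polls/polls gives a rough busy fraction. Checking the numbers printed below (a throwaway one-liner, not part of the suite):

    awk 'BEGIN { printf "nsid1 busy: %.2f\nnsid2 busy: %.2f\n", 1 - 4463/7875, 1 - 7372/10411 }'
    # nsid1 busy: 0.43
    # nsid2 busy: 0.29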
00:16:47.229 00:16:47.229 ==================== 00:16:47.229 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:47.229 TCP transport: 00:16:47.229 polls: 7875 00:16:47.229 idle_polls: 4463 00:16:47.229 sock_completions: 3412 00:16:47.229 nvme_completions: 4227 00:16:47.229 submitted_requests: 6312 00:16:47.229 queued_requests: 1 00:16:47.229 00:16:47.229 ==================== 00:16:47.229 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:47.229 TCP transport: 00:16:47.229 polls: 10411 00:16:47.229 idle_polls: 7372 00:16:47.229 sock_completions: 3039 00:16:47.229 nvme_completions: 6103 00:16:47.229 submitted_requests: 9188 00:16:47.229 queued_requests: 1 00:16:47.229 ======================================================== 00:16:47.229 Latency(us) 00:16:47.229 Device Information : IOPS MiB/s Average min max 00:16:47.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1054.77 263.69 124906.79 81350.01 233593.46 00:16:47.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1523.00 380.75 83969.26 40585.97 128195.17 00:16:47.229 ======================================================== 00:16:47.229 Total : 2577.76 644.44 100720.04 40585.97 233593.46 00:16:47.229 00:16:47.229 12:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:47.229 12:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.487 rmmod nvme_tcp 00:16:47.487 rmmod nvme_fabrics 00:16:47.487 rmmod nvme_keyring 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86366 ']' 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86366 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 86366 ']' 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 86366 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86366 00:16:47.487 killing process with pid 86366 00:16:47.487 12:24:02 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86366' 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 86366 00:16:47.487 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 86366 00:16:48.424 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.424 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.425 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.425 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.425 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.425 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.425 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.425 12:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:48.425 ************************************ 00:16:48.425 END TEST nvmf_perf 00:16:48.425 ************************************ 00:16:48.425 00:16:48.425 real 0m14.410s 00:16:48.425 user 0m52.825s 00:16:48.425 sys 0m3.661s 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.425 ************************************ 00:16:48.425 START TEST nvmf_fio_host 00:16:48.425 ************************************ 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:48.425 * Looking for test storage... 
00:16:48.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.425 12:24:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:48.425 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:48.426 Cannot find device "nvmf_tgt_br" 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.426 Cannot find device "nvmf_tgt_br2" 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:48.426 
Cannot find device "nvmf_tgt_br" 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:48.426 Cannot find device "nvmf_tgt_br2" 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:16:48.426 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:48.690 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:48.690 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.690 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.690 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:48.690 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:48.691 12:24:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:48.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:16:48.691 00:16:48.691 --- 10.0.0.2 ping statistics --- 00:16:48.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.691 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:48.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:48.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:48.691 00:16:48.691 --- 10.0.0.3 ping statistics --- 00:16:48.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.691 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:16:48.691 00:16:48.691 --- 10.0.0.1 ping statistics --- 00:16:48.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.691 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.691 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=86846 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 86846 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 86846 ']' 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.950 12:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.950 [2024-07-25 12:24:03.636629] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:48.950 [2024-07-25 12:24:03.636805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.950 [2024-07-25 12:24:03.778734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:49.208 [2024-07-25 12:24:03.947870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.208 [2024-07-25 12:24:03.948204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.208 [2024-07-25 12:24:03.948579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.208 [2024-07-25 12:24:03.948847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.208 [2024-07-25 12:24:03.948971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
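At this point the fio-host test has its target coming up inside the network namespace assembled earlier. Condensed from the nvmf_veth_init trace above, the virtual topology and launch reduce to roughly the following (names, addresses, and masks are verbatim from the log; link-up steps are omitted for brevity, so treat this as a sketch of what the helper does rather than a drop-in script; run as root):

    # namespace plus three veth pairs: one initiator-side, two target-side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target, first path
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # target, second path
    # bridge the host-side peers together and let NVMe/TCP traffic through
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # the target runs inside the namespace, so 10.0.0.2:4420 is only reachable across the veth/bridge path
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF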
00:16:49.208 [2024-07-25 12:24:03.949280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.208 [2024-07-25 12:24:03.949438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.208 [2024-07-25 12:24:03.949517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.208 [2024-07-25 12:24:03.949518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.776 12:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:49.776 12:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:16:49.776 12:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.034 [2024-07-25 12:24:04.872561] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.292 12:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:50.292 12:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:50.292 12:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.292 12:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:50.550 Malloc1 00:16:50.550 12:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:50.808 12:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:51.066 12:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.323 [2024-07-25 12:24:06.033694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.323 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # shift 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:51.581 12:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:51.838 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:51.838 fio-3.35 00:16:51.838 Starting 1 thread 00:16:54.364 00:16:54.364 test: (groupid=0, jobs=1): err= 0: pid=86977: Thu Jul 25 12:24:08 2024 00:16:54.364 read: IOPS=8720, BW=34.1MiB/s (35.7MB/s)(68.4MiB/2007msec) 00:16:54.364 slat (nsec): min=1862, max=339635, avg=2641.77, stdev=3262.02 00:16:54.364 clat (usec): min=3110, max=13743, avg=7672.94, stdev=668.35 00:16:54.364 lat (usec): min=3152, max=13745, avg=7675.59, stdev=668.15 00:16:54.364 clat percentiles (usec): 00:16:54.364 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:16:54.364 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7570], 60.00th=[ 7701], 00:16:54.364 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 8717], 00:16:54.364 | 99.00th=[10028], 99.50th=[10552], 99.90th=[11994], 99.95th=[13042], 00:16:54.364 | 99.99th=[13304] 00:16:54.364 bw ( KiB/s): min=33720, max=35704, per=100.00%, avg=34884.00, stdev=838.27, samples=4 00:16:54.364 iops : min= 8430, max= 8926, avg=8721.00, stdev=209.57, samples=4 00:16:54.364 write: IOPS=8717, BW=34.1MiB/s (35.7MB/s)(68.3MiB/2007msec); 0 zone resets 00:16:54.364 slat (nsec): min=1918, max=241263, avg=2742.59, stdev=2171.72 00:16:54.364 clat (usec): min=2286, max=13692, avg=6944.15, stdev=599.01 00:16:54.364 lat (usec): min=2299, max=13694, avg=6946.89, stdev=598.84 00:16:54.364 clat percentiles (usec): 00:16:54.364 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6521], 
00:16:54.364 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6980], 00:16:54.364 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7767], 00:16:54.364 | 99.00th=[ 9110], 99.50th=[ 9634], 99.90th=[11207], 99.95th=[12780], 00:16:54.364 | 99.99th=[13304] 00:16:54.364 bw ( KiB/s): min=34264, max=35424, per=99.97%, avg=34862.00, stdev=510.47, samples=4 00:16:54.364 iops : min= 8566, max= 8856, avg=8715.50, stdev=127.62, samples=4 00:16:54.364 lat (msec) : 4=0.11%, 10=99.26%, 20=0.63% 00:16:54.364 cpu : usr=66.55%, sys=24.28%, ctx=5, majf=0, minf=7 00:16:54.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:54.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.364 issued rwts: total=17503,17497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.364 00:16:54.364 Run status group 0 (all jobs): 00:16:54.364 READ: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=68.4MiB (71.7MB), run=2007-2007msec 00:16:54.364 WRITE: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=68.3MiB (71.7MB), run=2007-2007msec 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:54.364 12:24:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:54.364 12:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:54.364 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:54.364 fio-3.35 00:16:54.364 Starting 1 thread 00:16:56.891 00:16:56.891 test: (groupid=0, jobs=1): err= 0: pid=87026: Thu Jul 25 12:24:11 2024 00:16:56.891 read: IOPS=7935, BW=124MiB/s (130MB/s)(249MiB/2005msec) 00:16:56.891 slat (usec): min=3, max=120, avg= 3.83, stdev= 1.88 00:16:56.891 clat (usec): min=2711, max=25459, avg=9444.99, stdev=2462.48 00:16:56.891 lat (usec): min=2715, max=25464, avg=9448.82, stdev=2462.74 00:16:56.891 clat percentiles (usec): 00:16:56.891 | 1.00th=[ 5014], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7373], 00:16:56.891 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:16:56.891 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12256], 95.00th=[13698], 00:16:56.891 | 99.00th=[17433], 99.50th=[19006], 99.90th=[21627], 99.95th=[22152], 00:16:56.891 | 99.99th=[24773] 00:16:56.891 bw ( KiB/s): min=61280, max=74560, per=51.49%, avg=65368.00, stdev=6281.65, samples=4 00:16:56.891 iops : min= 3830, max= 4660, avg=4085.50, stdev=392.60, samples=4 00:16:56.891 write: IOPS=4649, BW=72.6MiB/s (76.2MB/s)(134MiB/1845msec); 0 zone resets 00:16:56.891 slat (usec): min=36, max=358, avg=38.84, stdev= 8.51 00:16:56.891 clat (usec): min=3424, max=21594, avg=11674.97, stdev=2167.71 00:16:56.891 lat (usec): min=3460, max=21631, avg=11713.81, stdev=2168.89 00:16:56.892 clat percentiles (usec): 00:16:56.892 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 9896], 00:16:56.892 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:16:56.892 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14615], 95.00th=[15533], 00:16:56.892 | 99.00th=[19006], 99.50th=[20055], 99.90th=[21103], 99.95th=[21365], 00:16:56.892 | 99.99th=[21627] 00:16:56.892 bw ( KiB/s): min=63584, max=77824, per=91.70%, avg=68216.00, stdev=6620.08, samples=4 00:16:56.892 iops : min= 3974, max= 4864, avg=4263.50, stdev=413.75, samples=4 00:16:56.892 lat (msec) : 4=0.18%, 10=47.63%, 20=51.78%, 50=0.42% 00:16:56.892 cpu : usr=70.81%, sys=18.61%, ctx=4, majf=0, minf=12 00:16:56.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:56.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:56.892 issued rwts: total=15910,8578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:56.892 00:16:56.892 Run status group 0 (all jobs): 00:16:56.892 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2005-2005msec 
00:16:56.892 WRITE: bw=72.6MiB/s (76.2MB/s), 72.6MiB/s-72.6MiB/s (76.2MB/s-76.2MB/s), io=134MiB (141MB), run=1845-1845msec 00:16:56.892 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.892 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:56.892 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:56.892 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:56.892 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:56.892 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:56.892 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.150 rmmod nvme_tcp 00:16:57.150 rmmod nvme_fabrics 00:16:57.150 rmmod nvme_keyring 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 86846 ']' 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 86846 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 86846 ']' 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 86846 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86846 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86846' 00:16:57.150 killing process with pid 86846 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 86846 00:16:57.150 12:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 86846 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 
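Teardown is the mirror image of setup, and both tests in this section share it via nvmftestfini: delete the subsystem, unload the host-side NVMe modules, kill the target, and dismantle the namespace. In outline (the pid and interface names come from this run; the exact namespace-removal command is an assumption about what _remove_spdk_ns does):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # drags out nvme_fabrics and nvme_keyring too, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 86846 && wait 86846       # the namespaced nvmf_tgt started earlier
    ip netns delete nvmf_tgt_ns_spdk    # assumption: how _remove_spdk_ns disposes of the namespace
    ip -4 addr flush nvmf_init_if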
00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:57.430 ************************************ 00:16:57.430 END TEST nvmf_fio_host 00:16:57.430 ************************************ 00:16:57.430 00:16:57.430 real 0m9.163s 00:16:57.430 user 0m36.924s 00:16:57.430 sys 0m2.455s 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.430 12:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.692 ************************************ 00:16:57.692 START TEST nvmf_failover 00:16:57.692 ************************************ 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:57.692 * Looking for test storage... 00:16:57.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.692 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 
-- # NVME_CONNECT='nvme connect' 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.693 
12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:57.693 Cannot find device "nvmf_tgt_br" 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:57.693 Cannot find device "nvmf_tgt_br2" 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:57.693 Cannot find device "nvmf_tgt_br" 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:57.693 Cannot find device "nvmf_tgt_br2" 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:57.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:57.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:57.693 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:57.952 12:24:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:57.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:16:57.952 00:16:57.952 --- 10.0.0.2 ping statistics --- 00:16:57.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.952 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:57.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:57.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:16:57.952 00:16:57.952 --- 10.0.0.3 ping statistics --- 00:16:57.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.952 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:57.952 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:57.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:57.952 00:16:57.952 --- 10.0.0.1 ping statistics --- 00:16:57.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.953 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87240 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87240 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87240 ']' 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.953 12:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:57.953 [2024-07-25 12:24:12.814489] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:16:57.953 [2024-07-25 12:24:12.814609] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.211 [2024-07-25 12:24:12.959453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:58.469 [2024-07-25 12:24:13.077785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
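Editor's note: the three ping checks earlier in this block are the acceptance test for the topology that nvmf_veth_init just assembled: an initiator-side veth nvmf_init_if (10.0.0.1/24) on the host, target-side veths nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. A condensed sketch of that plumbing, with commands copied from the log above (the second target veth, nvmf_tgt_if2/nvmf_tgt_br2, is handled the same way and omitted here, as are the cleanup paths):

  # one namespace, veth pairs bridging host <-> namespace (per nvmf_veth_init)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # tie the host-side peers together with a bridge and open the NVMe/TCP port
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # host -> target namespace, across the bridge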
00:16:58.469 [2024-07-25 12:24:13.077855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.469 [2024-07-25 12:24:13.077867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.469 [2024-07-25 12:24:13.077875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.469 [2024-07-25 12:24:13.077882] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.469 [2024-07-25 12:24:13.078031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.469 [2024-07-25 12:24:13.078199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.469 [2024-07-25 12:24:13.078202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.033 12:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.033 12:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:16:59.033 12:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.033 12:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:59.033 12:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:59.033 12:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.033 12:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:59.291 [2024-07-25 12:24:14.120120] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.291 12:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:59.549 Malloc0 00:16:59.807 12:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:59.807 12:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:00.373 12:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.373 [2024-07-25 12:24:15.194776] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.373 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:00.631 [2024-07-25 12:24:15.430974] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:00.631 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:00.936 [2024-07-25 12:24:15.715181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:00.936 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87353 00:17:00.936 
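Editor's note: with nvmf_tgt running inside the namespace, the target configuration the log just walked through is a short rpc.py sequence. A hedged condensation (commands and values copied from the log; the malloc sizes come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192             # transport options exactly as the harness passes them
  $rpc bdev_malloc_create 64 512 -b Malloc0                # 64 MiB backing bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                           # three listeners for the failover test to rotate through
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done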
12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:00.936 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87353 /var/tmp/bdevperf.sock 00:17:00.936 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87353 ']' 00:17:00.936 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:00.936 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.936 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.936 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.936 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.936 12:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:02.315 12:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.315 12:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:17:02.315 12:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:02.315 NVMe0n1 00:17:02.315 12:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:02.880 00:17:02.880 12:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87406 00:17:02.880 12:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:02.880 12:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:03.813 12:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.071 [2024-07-25 12:24:18.726901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9e50 is same with the state(5) to be set 00:17:04.071 [2024-07-25 12:24:18.726958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9e50 is same with the state(5) to be set 00:17:04.071 [2024-07-25 12:24:18.726970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9e50 is same with the state(5) to be set 00:17:04.071 [2024-07-25 12:24:18.726980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9e50 is same with the state(5) to be set 00:17:04.071 [2024-07-25 12:24:18.726988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9e50 is same with the 
state(5) to be set 00:17:04.071
[2024-07-25 12:24:18.726997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9e50 is same with the state(5) to be set 00:17:04.071
[2024-07-25 12:24:18.727005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b9e50 is same with the state(5) to be set 00:17:04.071
12:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:07.350
12:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:07.350
00:17:07.350
12:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:07.609
[2024-07-25 12:24:22.332386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15babd0 is same with the state(5) to be set 00:17:07.609
[... the same recv-state message for tqpair=0x15babd0 repeats verbatim, with timestamps advancing from 12:24:22.332444 through 12:24:22.333485; the duplicate lines are collapsed here ...]
12:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:10.910
12:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.910
[2024-07-25 12:24:25.667112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.910
12:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:17:11.845
12:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:12.161
[2024-07-25 12:24:26.950550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.161
[... the same recv-state message for tqpair=0x1773f70 repeats verbatim, with timestamps advancing from 12:24:26.950602 through 12:24:26.951458; the excerpt breaks off mid-message at that point ...]
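Editor's note: stepping back from the raw trace, the failover exercise recorded between 12:24:15 and the truncation point is symmetric. bdevperf (started above with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f) is given two paths to cnode1, and the test then removes and re-adds listeners so the verify workload is forced from one path to the next; each burst of recv-state errors coincides with the teardown of qpairs on the listener just removed. A hedged condensation of the sequence, commands copied from the log (the sleeps mirror the test's settle time):

  trpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc() { "$trpc" -s /var/tmp/bdevperf.sock "$@"; }   # bdevperf's private RPC socket

  # two paths into the same subsystem, attached through bdevperf
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  "$trpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
  sleep 3
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$trpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  "$trpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore 4420
  sleep 1
  "$trpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back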
with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951637] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 [2024-07-25 12:24:26.951653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773f70 is same with the state(5) to be set 00:17:12.162 12:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 87406 00:17:18.727 0 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87353 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87353 ']' 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87353 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87353 00:17:18.727 killing process with pid 87353 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87353' 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87353 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87353 00:17:18.727 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:18.727 [2024-07-25 12:24:15.791086] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:17:18.727 [2024-07-25 12:24:15.791217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87353 ] 00:17:18.727 [2024-07-25 12:24:15.930191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.727 [2024-07-25 12:24:16.070239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.727 Running I/O for 15 seconds... 
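The tcp.c:1653 flood collapsed above is a single guard firing once per call: the target's TCP transport refuses to "change" a qpair's receive state to the state it is already in. Below is a standalone sketch of that guard, reconstructed purely from the message text; the types and enum numbering are illustrative stand-ins, not SPDK's actual source.

    #include <stdio.h>

    /* Illustrative stand-in: "state(5)" in the log refers to SPDK's
     * enum nvme_tcp_pdu_recv_state, whose real names/values differ. */
    enum pdu_recv_state { RECV_STATE_0, RECV_STATE_1, RECV_STATE_2,
                          RECV_STATE_3, RECV_STATE_4, RECV_STATE_5 };

    struct tcp_qpair {
            enum pdu_recv_state recv_state;
    };

    static void
    set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* The line flooding the log: the transition is refused
                     * because the qpair is already in the requested state. */
                    fprintf(stderr, "The recv state of tqpair=%p is same with "
                            "the state(%d) to be set\n", (void *)tqpair, (int)state);
                    return;
            }
            tqpair->recv_state = state;
    }

    int main(void)
    {
            struct tcp_qpair q = { .recv_state = RECV_STATE_5 };
            set_recv_state(&q, RECV_STATE_5); /* reproduces the message once */
            return 0;
    }

The message is noisy but benign in itself: the state assignment is simply skipped. The interesting signal is that the failover test drove the qpair into the same terminal state hundreds of times in under a millisecond.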
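The killprocess 87353 trace above spells out the harness's teardown sequence: bail if the pid argument is empty, probe liveness with kill -0, read the process name, special-case sudo-owned processes, then kill and wait. The real helper is a bash function in autotest_common.sh; the following is a hypothetical standalone C rendering of the same sequence, matching the traced steps line for line.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static int
    killprocess(pid_t pid)
    {
            char path[64], comm[64] = "";
            FILE *f;

            if (pid <= 0)                     /* '[' -z "$1" ']' */
                    return 1;
            if (kill(pid, 0) != 0)            /* kill -0 $pid: still alive? */
                    return 1;

            snprintf(path, sizeof(path), "/proc/%d/comm", (int)pid);
            f = fopen(path, "r");             /* ps --no-headers -o comm= $pid */
            if (f != NULL) {
                    if (fgets(comm, sizeof(comm), f) == NULL)
                            comm[0] = '\0';
                    fclose(f);
            }
            comm[strcspn(comm, "\n")] = '\0';

            if (strcmp(comm, "sudo") == 0)    /* '[' reactor_0 = sudo ']' */
                    return 1;                 /* the harness escalates instead */

            printf("killing process with pid %d\n", (int)pid);
            kill(pid, SIGTERM);               /* kill $pid */
            waitpid(pid, NULL, 0);            /* wait $pid (children only) */
            return 0;
    }

    int main(int argc, char **argv)
    {
            return (argc > 1) ? killprocess((pid_t)atoi(argv[1])) : 1;
    }

In the run above the comm check returned reactor_0 (the SPDK bdevperf reactor thread), so the plain-kill path was taken and the harness then dumped try.txt, the bdevperf log that follows.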
00:17:18.727 [2024-07-25 12:24:18.727602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:18.727 [2024-07-25 12:24:18.727646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every in-flight command on qid:1 -- WRITE lba:80912 through lba:81464 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:80784 through lba:80840 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all len:8 -- each completed ABORTED - SQ DELETION (00/08) ...]
00:17:18.729 [2024-07-25 12:24:18.730478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:18.729 [2024-07-25 12:24:18.730503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81472 len:8 PRP1 0x0 PRP2 0x0
00:17:18.729 [2024-07-25 12:24:18.730525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, followed by the same manual-completion triple, repeats for every queued request -- WRITE lba:81480 through lba:81792 and READ lba:80848 through lba:80904 (all cid:0, PRP1 0x0 PRP2 0x0) -- each completed ABORTED - SQ DELETION (00/08) ...]
00:17:18.731 [2024-07-25 12:24:18.734497] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16cc8a0 was disconnected and freed. reset controller.
00:17:18.731 [2024-07-25 12:24:18.734533] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:18.731 [2024-07-25 12:24:18.734596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.731 [2024-07-25 12:24:18.734617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.731 [2024-07-25 12:24:18.734632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.731 [2024-07-25 12:24:18.734646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.731 [2024-07-25 12:24:18.734660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.731 [2024-07-25 12:24:18.734673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.731 [2024-07-25 12:24:18.734698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.731 [2024-07-25 12:24:18.734723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.731 [2024-07-25 12:24:18.734737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:18.731 [2024-07-25 12:24:18.734799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167be30 (9): Bad file descriptor 00:17:18.731 [2024-07-25 12:24:18.738715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.731 [2024-07-25 12:24:18.776112] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:18.731 [2024-07-25 12:24:22.336678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.731 [2024-07-25 12:24:22.336742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repetitive records condensed: in-flight READ commands lba:85856-85888 (SGL TRANSPORT DATA BLOCK) and WRITE commands lba:85896-86328 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on qid:1 were each aborted with ABORTED - SQ DELETION (00/08)]
[repetitive records condensed: nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request repeat for queued WRITE I/O lba:86336-86808 (qid:1, len:8, PRP1 0x0 PRP2 0x0), each completed manually with ABORTED - SQ DELETION (00/08)]
00:17:18.735 [2024-07-25 12:24:22.341617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.735 [2024-07-25 12:24:22.341627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.735 [2024-07-25 12:24:22.341637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86816 len:8 PRP1 0x0 PRP2 0x0 00:17:18.735 [2024-07-25 12:24:22.341650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:18.735 [2024-07-25 12:24:22.341664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.735 [2024-07-25 12:24:22.341673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.735 [2024-07-25 12:24:22.341690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86824 len:8 PRP1 0x0 PRP2 0x0 00:17:18.735 [2024-07-25 12:24:22.341704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.735 [2024-07-25 12:24:22.341719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.735 [2024-07-25 12:24:22.341728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.735 [2024-07-25 12:24:22.341739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86832 len:8 PRP1 0x0 PRP2 0x0 00:17:18.735 [2024-07-25 12:24:22.341752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.735 [2024-07-25 12:24:22.341765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.735 [2024-07-25 12:24:22.341775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.735 [2024-07-25 12:24:22.341785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86840 len:8 PRP1 0x0 PRP2 0x0 00:17:18.735 [2024-07-25 12:24:22.341798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.735 [2024-07-25 12:24:22.341812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.735 [2024-07-25 12:24:22.341821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.735 [2024-07-25 12:24:22.341832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86848 len:8 PRP1 0x0 PRP2 0x0 00:17:18.735 [2024-07-25 12:24:22.341845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.735 [2024-07-25 12:24:22.341863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.735 [2024-07-25 12:24:22.341873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.735 [2024-07-25 12:24:22.341883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86856 len:8 PRP1 0x0 PRP2 0x0 00:17:18.735 [2024-07-25 12:24:22.341896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.736 [2024-07-25 12:24:22.341909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.736 [2024-07-25 12:24:22.341919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.736 [2024-07-25 12:24:22.341929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86864 len:8 PRP1 0x0 PRP2 0x0 00:17:18.736 [2024-07-25 12:24:22.341942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.736 [2024-07-25 12:24:22.341999] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f1d90 was disconnected and freed. reset controller. 00:17:18.736 [2024-07-25 12:24:22.342016] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:18.736 [2024-07-25 12:24:22.342079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.737 [2024-07-25 12:24:22.342100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:22.342116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.737 [2024-07-25 12:24:22.342129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:22.342143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.737 [2024-07-25 12:24:22.342157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:22.342183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.737 [2024-07-25 12:24:22.342197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:22.342210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:18.737 [2024-07-25 12:24:22.342262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167be30 (9): Bad file descriptor 00:17:18.737 [2024-07-25 12:24:22.346070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.737 [2024-07-25 12:24:22.382883] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
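The block above is one complete failover iteration as bdev_nvme logs it: every I/O still queued on the dying qpair is completed manually with status (00/08), which in NVMe-spec terms is status code type 00h (generic) / status code 08h (Command Aborted due to SQ Deletion); the qpair is then freed, the driver fails over from the 4421 listener to 4422, and the controller reset completes. A minimal sketch of how a cycle like this can be forced against a running target follows; the NQN, address, and ports are copied from the log, while the rpc.py path and the sleep are assumptions, not the actual failover.sh logic.

```bash
#!/usr/bin/env bash
# Hedged sketch only -- mirrors the failover sequence visible in the log above.
# rpc.py must point at a running SPDK nvmf target; the path is an assumption.
rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Drop the listener the initiator is currently connected to (4421 here); the
# host sees the qpair die, aborts queued I/O with SQ DELETION status, and
# bdev_nvme fails over to the next trid (4422).
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
sleep 5
# Restore the listener so a later iteration can fail back to it.
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
```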
00:17:18.737 [2024-07-25 12:24:26.953212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 
12:24:26.953598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.953983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.953996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41320 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-07-25 12:24:26.954578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-07-25 12:24:26.954594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-07-25 12:24:26.954611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-07-25 12:24:26.954640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-07-25 12:24:26.954670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-07-25 12:24:26.954698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-07-25 12:24:26.954728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.954767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.954799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.954827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:18.738 [2024-07-25 12:24:26.954857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.954886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.954915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.954943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.954972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.954987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955150] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955459] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.955971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.955986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.956000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.956015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.956029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-07-25 12:24:26.956044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.956057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 
[2024-07-25 12:24:26.956073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.738 [2024-07-25 12:24:26.956087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:104 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:18.739 [2024-07-25 12:24:26.956705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.956760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41904 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.956774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.956802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.956812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41912 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.956825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.956852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.956862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41920 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.956875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.956898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.956909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41928 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.956922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.956945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.956955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41936 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.956968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.956982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.956991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41944 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41952 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41960 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41968 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41976 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41984 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41992 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:18.739 [2024-07-25 12:24:26.957327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42000 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42008 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42016 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:18.739 [2024-07-25 12:24:26.957482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:18.739 [2024-07-25 12:24:26.957493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41376 len:8 PRP1 0x0 PRP2 0x0 00:17:18.739 [2024-07-25 12:24:26.957506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957563] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1708dc0 was disconnected and freed. reset controller. 
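The dump above is one repeating pattern per queued command: when a submission queue is deleted for a path switch, every outstanding READ/WRITE is completed manually with ABORTED - SQ DELETION (status code type 00, status code 08), after which the qpair is freed and the controller reset begins below. A minimal sketch for summarizing such a dump offline (assuming the capture file try.txt that this test cats later, and the exact message text above):

    # total I/Os aborted by submission-queue deletion
    grep -c 'ABORTED - SQ DELETION' try.txt
    # the same aborts broken down by opcode
    grep -oE '(READ|WRITE) sqid:[0-9]+' try.txt | awk '{print $1}' | sort | uniq -c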
00:17:18.739 [2024-07-25 12:24:26.957580] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:18.739 [2024-07-25 12:24:26.957636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.739 [2024-07-25 12:24:26.957657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.739 [2024-07-25 12:24:26.957685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.739 [2024-07-25 12:24:26.957713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-07-25 12:24:26.957726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.739 [2024-07-25 12:24:26.957739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-07-25 12:24:26.957752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:18.740 [2024-07-25 12:24:26.957804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167be30 (9): Bad file descriptor 00:17:18.740 [2024-07-25 12:24:26.961614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.740 [2024-07-25 12:24:27.002761] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:18.740 00:17:18.740 Latency(us) 00:17:18.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.740 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:18.740 Verification LBA range: start 0x0 length 0x4000 00:17:18.740 NVMe0n1 : 15.01 9030.85 35.28 235.45 0.00 13781.97 547.37 16562.73 00:17:18.740 =================================================================================================================== 00:17:18.740 Total : 9030.85 35.28 235.45 0.00 13781.97 547.37 16562.73 00:17:18.740 Received shutdown signal, test time was about 15.000000 seconds 00:17:18.740 00:17:18.740 Latency(us) 00:17:18.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.740 =================================================================================================================== 00:17:18.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:18.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
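The records above close the first bdevperf run: three path switches, each ending in "Resetting controller successful", then the 15-second latency summary; the grep -c above is how the harness asserts exactly three such resets before relaunching bdevperf for the next phase. The relaunch uses the RPC-driven pattern visible in the xtrace below, roughly (paths as in this trace; waitforlisten is the harness helper that polls the socket):

    # -z keeps bdevperf idle: it opens the RPC socket and waits for perform_tests
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
    # ...provision controllers over the same socket, then trigger the run out of band:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests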
00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87609 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87609 /var/tmp/bdevperf.sock 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87609 ']' 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.740 12:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:18.998 12:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.998 12:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:17:18.998 12:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:19.255 [2024-07-25 12:24:34.108137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:19.514 12:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:19.774 [2024-07-25 12:24:34.428404] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:19.774 12:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.031 NVMe0n1 00:17:20.031 12:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.290 00:17:20.290 12:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.858 00:17:20.858 12:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:20.858 12:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:21.116 12:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:21.374 12:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:24.659 12:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:24.659 12:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:24.659 12:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=87746 00:17:24.659 12:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:24.659 12:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 87746 00:17:26.032 0 00:17:26.032 12:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:26.032 [2024-07-25 12:24:32.923458] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:17:26.033 [2024-07-25 12:24:32.923568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87609 ] 00:17:26.033 [2024-07-25 12:24:33.054743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.033 [2024-07-25 12:24:33.169438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.033 [2024-07-25 12:24:36.041821] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:26.033 [2024-07-25 12:24:36.042008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.033 [2024-07-25 12:24:36.042050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.033 [2024-07-25 12:24:36.042082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.033 [2024-07-25 12:24:36.042105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.033 [2024-07-25 12:24:36.042127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.033 [2024-07-25 12:24:36.042149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.033 [2024-07-25 12:24:36.042179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.033 [2024-07-25 12:24:36.042201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.033 [2024-07-25 12:24:36.042223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
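Condensing the xtrace above: the test attaches the same controller name over three portals to build failover paths, verifies the bdev exists, then tears down the active path and expects I/O to continue on the remaining ones (the try.txt dump around this point records exactly that failover and reset). A sketch of that sequence, with the commands as issued above:

    # same -b NVMe0 on each portal => one controller with three failover paths
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # drop the active path; the controller must survive on the others
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0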
00:17:26.033 [2024-07-25 12:24:36.042285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:26.033 [2024-07-25 12:24:36.042349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ae30 (9): Bad file descriptor 00:17:26.033 [2024-07-25 12:24:36.047276] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:26.033 Running I/O for 1 seconds... 00:17:26.033 00:17:26.033 Latency(us) 00:17:26.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.033 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:26.033 Verification LBA range: start 0x0 length 0x4000 00:17:26.033 NVMe0n1 : 1.01 8855.51 34.59 0.00 0.00 14388.31 2204.39 15192.44 00:17:26.033 =================================================================================================================== 00:17:26.033 Total : 8855.51 34.59 0.00 0.00 14388.31 2204.39 15192.44 00:17:26.033 12:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:26.033 12:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:26.033 12:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:26.291 12:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:26.291 12:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:26.550 12:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:26.808 12:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 87609 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87609 ']' 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87609 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87609 00:17:30.132 killing process with pid 87609 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87609' 00:17:30.132 12:24:44 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87609 00:17:30.132 12:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87609 00:17:30.390 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:30.390 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.649 rmmod nvme_tcp 00:17:30.649 rmmod nvme_fabrics 00:17:30.649 rmmod nvme_keyring 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87240 ']' 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87240 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87240 ']' 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87240 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87240 00:17:30.649 killing process with pid 87240 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87240' 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87240 00:17:30.649 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87240 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:30.907 ************************************ 00:17:30.907 END TEST nvmf_failover 00:17:30.907 ************************************ 00:17:30.907 00:17:30.907 real 0m33.483s 00:17:30.907 user 2m10.860s 00:17:30.907 sys 0m4.675s 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.907 12:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.166 ************************************ 00:17:31.166 START TEST nvmf_host_discovery 00:17:31.166 ************************************ 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:31.166 * Looking for test storage... 
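Teardown for nvmf_failover, recorded just above, follows the standard sequence: delete the NVMe-oF subsystem, unload the kernel initiator modules, remove the target network namespace, and flush the initiator interface. Roughly (a sketch; the body of _remove_spdk_ns is not shown in this trace, so the netns deletion line is an assumption):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if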
00:17:31.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:31.166 12:24:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.166 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 
-- # ip link set nvmf_tgt_br nomaster 00:17:31.167 Cannot find device "nvmf_tgt_br" 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.167 Cannot find device "nvmf_tgt_br2" 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:31.167 Cannot find device "nvmf_tgt_br" 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:31.167 Cannot find device "nvmf_tgt_br2" 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:17:31.167 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:31.167 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:31.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:31.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:31.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:17:31.425 00:17:31.425 --- 10.0.0.2 ping statistics --- 00:17:31.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.425 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:31.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:31.425 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:17:31.425 00:17:31.425 --- 10.0.0.3 ping statistics --- 00:17:31.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.425 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:31.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
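nvmf_veth_init, traced above, rebuilds a self-contained test network after the stale one is torn down: a target namespace holding two veth legs (10.0.0.2 and 10.0.0.3), an initiator interface at 10.0.0.1 in the root namespace, all joined by a bridge, with iptables openings for NVMe/TCP; the three pings around this point verify each leg. The core of that topology, reduced to the commands executed above (the nvmf_tgt_if2 leg and the `ip link set ... up` calls omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT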
00:17:31.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:31.425 00:17:31.425 --- 10.0.0.1 ping statistics --- 00:17:31.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.425 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.425 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88051 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88051 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 88051 ']' 00:17:31.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.426 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.684 [2024-07-25 12:24:46.305939] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
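With the namespace network verified, the harness loads the kernel NVMe/TCP initiator and starts the target inside the namespace; waitforlisten then blocks until the RPC socket answers, so every later rpc_cmd lands on this process. Reduced to its essentials (command line as in the trace above):

    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # harness helper: polls /var/tmp/spdk.sock until RPCs succeed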
00:17:31.684 [2024-07-25 12:24:46.306039] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.684 [2024-07-25 12:24:46.444356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.942 [2024-07-25 12:24:46.573015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.942 [2024-07-25 12:24:46.573065] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.942 [2024-07-25 12:24:46.573079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.942 [2024-07-25 12:24:46.573087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.942 [2024-07-25 12:24:46.573095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.942 [2024-07-25 12:24:46.573128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.519 [2024-07-25 12:24:47.366013] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.519 [2024-07-25 12:24:47.374108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.519 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.778 null0 00:17:32.778 12:24:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.778 null1 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88101 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88101 /tmp/host.sock 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 88101 ']' 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:32.778 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.778 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.778 [2024-07-25 12:24:47.478429] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
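The discovery test therefore runs two SPDK instances: the target (core mask 0x2, default /var/tmp/spdk.sock) exporting a discovery service on 10.0.0.2:8009 plus two null bdevs to publish later, and a second nvmf_tgt acting as the host (core mask 0x1) on /tmp/host.sock. The target-side provisioning above, condensed:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512   # 1000 MiB, 512-byte blocks
    rpc_cmd bdev_null_create null1 1000 512
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &   # host-side instance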
00:17:32.778 [2024-07-25 12:24:47.478857] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88101 ] 00:17:32.778 [2024-07-25 12:24:47.625657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.036 [2024-07-25 12:24:47.750776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:33.602 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:33.860 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.119 [2024-07-25 12:24:48.811422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:34.119 12:24:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.119 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:34.377 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.377 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:34.377 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:34.377 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.377 12:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:17:34.377 12:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:17:34.635 [2024-07-25 12:24:49.459412] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:34.635 [2024-07-25 12:24:49.459450] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:34.635 
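The recurring 'local max=10', '(( max-- ))', 'eval', and 'sleep 1' lines in the xtrace above all belong to the harness's generic polling helper, waitforcondition (common/autotest_common.sh @914-@920). A minimal sketch of that retry loop, reconstructed from the trace lines alone; the upstream helper may differ in detail:

  # Polling helper as suggested by the autotest_common.sh@914-@920 trace lines.
  waitforcondition() {
      local cond=$1   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
      local max=10    # the trace shows a ten-attempt budget
      while (( max-- )); do
          eval "$cond" && return 0   # condition satisfied, stop polling
          sleep 1                    # @920: wait a second and retry
      done
      return 1   # condition never came true within the budget
  }

Each failed probe (here, [[ '' == \n\v\m\e\0 ]]) therefore costs one second of sleep, during which the discovery attach entries below arrive.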
[2024-07-25 12:24:49.459470] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:34.892 [2024-07-25 12:24:49.545531] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:34.892 [2024-07-25 12:24:49.602701] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:34.893 [2024-07-25 12:24:49.602737] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.459 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 
'cond=get_notification_count && ((notification_count == expected_count))' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.460 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.718 [2024-07-25 12:24:50.359934] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:35.718 [2024-07-25 12:24:50.360939] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:35.718 [2024-07-25 12:24:50.360981] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:35.718 12:24:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.718 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.719 [2024-07-25 12:24:50.446444] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' 
'"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.719 [2024-07-25 12:24:50.512765] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:35.719 [2024-07-25 12:24:50.512799] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:35.719 [2024-07-25 12:24:50.512808] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:17:35.719 12:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:17:36.674 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:36.674 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:36.674 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:17:36.674 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:36.674 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.674 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.674 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:36.674 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:36.674 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.933 [2024-07-25 12:24:51.649584] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:36.933 [2024-07-25 12:24:51.649627] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:36.933 [2024-07-25 12:24:51.650075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.933 [2024-07-25 12:24:51.650112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.933 [2024-07-25 12:24:51.650128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.933 [2024-07-25 12:24:51.650139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.933 [2024-07-25 12:24:51.650149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.933 [2024-07-25 12:24:51.650159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.933 [2024-07-25 
12:24:51.650170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.933 [2024-07-25 12:24:51.650180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.933 [2024-07-25 12:24:51.650190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cfc50 is same with the state(5) to be set 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:36.933 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:36.934 [2024-07-25 12:24:51.660022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfc50 (9): Bad file descriptor 00:17:36.934 [2024-07-25 12:24:51.670046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:36.934 [2024-07-25 12:24:51.670195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.934 [2024-07-25 12:24:51.670222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cfc50 with addr=10.0.0.2, port=4420 00:17:36.934 [2024-07-25 12:24:51.670235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cfc50 is same with the state(5) to be set 00:17:36.934 [2024-07-25 12:24:51.670255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfc50 (9): Bad file descriptor 00:17:36.934 [2024-07-25 12:24:51.670288] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:36.934 [2024-07-25 12:24:51.670316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:36.934 [2024-07-25 12:24:51.670329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:17:36.934 [2024-07-25 12:24:51.670347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.934 [2024-07-25 12:24:51.680119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:36.934 [2024-07-25 12:24:51.680221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.934 [2024-07-25 12:24:51.680244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cfc50 with addr=10.0.0.2, port=4420 00:17:36.934 [2024-07-25 12:24:51.680256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cfc50 is same with the state(5) to be set 00:17:36.934 [2024-07-25 12:24:51.680275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfc50 (9): Bad file descriptor 00:17:36.934 [2024-07-25 12:24:51.680291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:36.934 [2024-07-25 12:24:51.680315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:36.934 [2024-07-25 12:24:51.680326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:36.934 [2024-07-25 12:24:51.680342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:36.934 [2024-07-25 12:24:51.690180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:36.934 [2024-07-25 12:24:51.690269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.934 [2024-07-25 12:24:51.690292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cfc50 with addr=10.0.0.2, port=4420 00:17:36.934 [2024-07-25 12:24:51.690328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cfc50 is same with the state(5) to be set 00:17:36.934 [2024-07-25 12:24:51.690347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfc50 (9): Bad file descriptor 00:17:36.934 [2024-07-25 12:24:51.690390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:36.934 [2024-07-25 12:24:51.690403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:36.934 [2024-07-25 12:24:51.690413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:36.934 [2024-07-25 12:24:51.690428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
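The burst of 'connect() failed, errno = 111' and 'Resetting controller failed.' entries here is the expected fallout of the step traced at host/discovery.sh@127 above: the target has just dropped its 4420 listener, so every reconnect attempt on the host's existing 4420 path is refused until the next discovery log page prunes that path. Pieced together from the @127 and @131 trace lines, the test step amounts to:

  # Reconstructed from the xtrace (host/discovery.sh@127 and @131): remove the
  # first listener, then poll until only the second port remains as a path.
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'

with NVMF_SECOND_PORT being 4421 in this run.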
00:17:36.934 [2024-07-25 12:24:51.700236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:36.934 [2024-07-25 12:24:51.700350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.934 [2024-07-25 12:24:51.700374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cfc50 with addr=10.0.0.2, port=4420 00:17:36.934 [2024-07-25 12:24:51.700385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cfc50 is same with the state(5) to be set 00:17:36.934 [2024-07-25 12:24:51.700403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfc50 (9): Bad file descriptor 00:17:36.934 [2024-07-25 12:24:51.700420] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:36.934 [2024-07-25 12:24:51.700429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:36.934 [2024-07-25 12:24:51.700439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:36.934 [2024-07-25 12:24:51.700455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:36.934 [2024-07-25 12:24:51.710303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:36.934 [2024-07-25 12:24:51.710388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.934 [2024-07-25 12:24:51.710410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cfc50 with addr=10.0.0.2, port=4420 00:17:36.934 [2024-07-25 12:24:51.710421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cfc50 is same with the state(5) to be set 00:17:36.934 [2024-07-25 12:24:51.710437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfc50 (9): Bad file descriptor 00:17:36.934 [2024-07-25 12:24:51.710463] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:36.934 [2024-07-25 12:24:51.710475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:36.934 [2024-07-25 12:24:51.710485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:36.934 [2024-07-25 12:24:51.710500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
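In parallel with the path shuffle, the harness keeps auditing SPDK's notification stream through notify_get_notifications. The host/discovery.sh@74-@75 trace lines show the counting is cursor based: fetch everything newer than notify_id, count it, and move the cursor forward (it steps 0 -> 1 -> 2 across the checks above). A sketch of that helper; the cursor arithmetic is inferred from the logged values rather than copied from the script:

  # get_notification_count as suggested by the host/discovery.sh@74-@75 trace.
  # The notify_id update is an inference from the logged 0 -> 1 -> 2 progression.
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
          -i "$notify_id" | jq '. | length')         # events newer than the cursor
      notify_id=$((notify_id + notification_count))  # advance past what was seen
  }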
00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:36.934 [2024-07-25 12:24:51.720360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:36.934 [2024-07-25 12:24:51.720472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.934 [2024-07-25 12:24:51.720497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cfc50 with addr=10.0.0.2, port=4420 00:17:36.934 [2024-07-25 12:24:51.720510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cfc50 is same with the state(5) to be set 00:17:36.934 [2024-07-25 12:24:51.720528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfc50 (9): Bad file descriptor 00:17:36.934 [2024-07-25 12:24:51.720544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:36.934 [2024-07-25 12:24:51.720554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:36.934 [2024-07-25 12:24:51.720565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:36.934 [2024-07-25 12:24:51.720580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
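Between reconnect attempts, the @130 wait above re-checks that the host still sees both namespaces (nvme0n1 nvme0n2) despite losing the 4420 path. The helper it polls is spelled out by the @55 trace lines below; reconstructed:

  # get_bdev_list per the host/discovery.sh@55 trace lines: the names of all
  # bdevs visible over the host's RPC socket, normalized onto one sorted line.
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }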
00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:36.934 [2024-07-25 12:24:51.730432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:36.934 [2024-07-25 12:24:51.730546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.934 [2024-07-25 12:24:51.730572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cfc50 with addr=10.0.0.2, port=4420 00:17:36.934 [2024-07-25 12:24:51.730584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cfc50 is same with the state(5) to be set 00:17:36.934 [2024-07-25 12:24:51.730630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfc50 (9): Bad file descriptor 00:17:36.934 [2024-07-25 12:24:51.730650] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:36.934 [2024-07-25 12:24:51.730660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:36.934 [2024-07-25 12:24:51.730671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:36.934 [2024-07-25 12:24:51.730686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
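Immediately below, the discovery poller finally prunes the dead path ('...cnode0:10.0.0.2:4420 not found') while keeping 4421, and the @131 wait verifies the surviving path set through the @63 helper, reconstructed here from its trace lines:

  # get_subsystem_paths per the host/discovery.sh@63 trace: the trsvcid of each
  # path on the named controller, numerically sorted onto a single line.
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

It reports '4420 4421' while both listeners are up and just '4421' once the removal has propagated.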
00:17:36.934 [2024-07-25 12:24:51.736367] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:36.934 [2024-07-25 12:24:51.736421] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:36.934 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:17:36.935 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:36.935 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:36.935 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.935 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:36.935 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.935 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:36.935 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:37.193 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.194 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:37.194 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.194 12:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.194 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.452 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:37.452 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:37.452 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:17:37.452 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:17:37.452 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:37.452 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.452 12:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.388 [2024-07-25 12:24:53.089569] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:38.388 [2024-07-25 12:24:53.089612] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:38.388 [2024-07-25 12:24:53.089633] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:38.388 [2024-07-25 12:24:53.175695] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:38.388 [2024-07-25 12:24:53.236361] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:38.388 [2024-07-25 12:24:53.236426] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.388 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.647 2024/07/25 12:24:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:38.647 request: 00:17:38.647 { 00:17:38.647 "method": "bdev_nvme_start_discovery", 00:17:38.647 "params": { 00:17:38.647 "name": "nvme", 00:17:38.647 "trtype": "tcp", 00:17:38.647 "traddr": "10.0.0.2", 00:17:38.647 "adrfam": "ipv4", 00:17:38.647 "trsvcid": "8009", 00:17:38.647 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:38.647 "wait_for_attach": true 00:17:38.647 } 00:17:38.647 } 00:17:38.647 Got JSON-RPC error response 00:17:38.647 GoRPCClient: error on JSON-RPC call 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 
-- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.647 2024/07/25 12:24:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:38.647 request: 00:17:38.647 { 00:17:38.647 "method": "bdev_nvme_start_discovery", 00:17:38.647 "params": { 00:17:38.647 "name": "nvme_second", 00:17:38.647 "trtype": "tcp", 00:17:38.647 "traddr": "10.0.0.2", 00:17:38.647 "adrfam": "ipv4", 00:17:38.647 "trsvcid": "8009", 00:17:38.647 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:38.647 "wait_for_attach": true 00:17:38.647 } 00:17:38.647 } 00:17:38.647 Got JSON-RPC error response 00:17:38.647 GoRPCClient: error on JSON-RPC call 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.647 
12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.647 12:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.019 [2024-07-25 12:24:54.505049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:40.019 [2024-07-25 12:24:54.505145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120c180 with addr=10.0.0.2, port=8010 00:17:40.019 [2024-07-25 12:24:54.505173] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:40.019 [2024-07-25 12:24:54.505183] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:40.019 [2024-07-25 12:24:54.505194] bdev_nvme.c:7073:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:17:40.951 [2024-07-25 12:24:55.505033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:40.951 [2024-07-25 12:24:55.505120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120c180 with addr=10.0.0.2, port=8010 00:17:40.951 [2024-07-25 12:24:55.505148] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:40.951 [2024-07-25 12:24:55.505159] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:40.951 [2024-07-25 12:24:55.505170] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:41.886 [2024-07-25 12:24:56.504867] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:17:41.886 2024/07/25 12:24:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:17:41.886 request: 00:17:41.886 { 00:17:41.886 "method": "bdev_nvme_start_discovery", 00:17:41.886 "params": { 00:17:41.886 "name": "nvme_second", 00:17:41.886 "trtype": "tcp", 00:17:41.886 "traddr": "10.0.0.2", 00:17:41.886 "adrfam": "ipv4", 00:17:41.886 "trsvcid": "8010", 00:17:41.886 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:41.886 "wait_for_attach": false, 00:17:41.886 "attach_timeout_ms": 3000 00:17:41.886 } 00:17:41.886 } 00:17:41.886 Got JSON-RPC error response 00:17:41.886 GoRPCClient: error on JSON-RPC call 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88101 00:17:41.886 12:24:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.886 rmmod nvme_tcp 00:17:41.886 rmmod nvme_fabrics 00:17:41.886 rmmod nvme_keyring 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88051 ']' 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88051 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 88051 ']' 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 88051 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88051 00:17:41.886 killing process with pid 88051 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88051' 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 88051 00:17:41.886 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 88051 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- 
# ip -4 addr flush nvmf_init_if 00:17:42.144 00:17:42.144 real 0m11.184s 00:17:42.144 user 0m22.052s 00:17:42.144 sys 0m1.608s 00:17:42.144 ************************************ 00:17:42.144 END TEST nvmf_host_discovery 00:17:42.144 ************************************ 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:42.144 12:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.402 12:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.403 ************************************ 00:17:42.403 START TEST nvmf_host_multipath_status 00:17:42.403 ************************************ 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:42.403 * Looking for test storage... 00:17:42.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 
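At this point multipath_status.sh has sourced test/nvmf/common.sh, which pins the three listener ports and derives a fresh per-run host identity from `nvme gen-hostnqn`. A minimal standalone sketch of that setup, using the same variable names the trace shows — the suffix extraction for NVME_HOSTID is an assumption inferred from the logged values (the trace only shows the results, not the mechanism):

  # Fixed listener ports used throughout the nvmf host tests.
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422

  # gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>; the logged
  # NVME_HOSTID equals the trailing uuid, so stripping everything up to the
  # last colon reproduces it (assumed mechanism).
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

The NVME_HOST array is then passed to `nvme connect` so every controller the test attaches carries the same host identity.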
00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
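The harness next runs nvmftestinit; with NET_TYPE=virt it first flushes any stale namespace and then, in the nvmf_veth_init sequence traced below, builds a veth-plus-bridge topology so the initiator at 10.0.0.1 and the namespaced target at 10.0.0.2/10.0.0.3 share one L2 segment. A condensed sketch of that wiring, with the device names from the trace (cleanup and error handling omitted):

  # One initiator-side veth and two target-side veths, all bridged.
  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together and open the firewall for 4420.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity-check both directions, as the trace does.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages in the trace are the expected output of the teardown pass that precedes this setup, not failures.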
00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:42.403 
12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:42.403 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:42.404 Cannot find device "nvmf_tgt_br" 00:17:42.404 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:17:42.404 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.404 Cannot find device "nvmf_tgt_br2" 00:17:42.404 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:17:42.404 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:42.404 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:42.404 Cannot find device "nvmf_tgt_br" 00:17:42.404 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:17:42.404 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:42.404 Cannot find device "nvmf_tgt_br2" 00:17:42.404 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:17:42.404 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.661 12:24:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.661 12:24:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:42.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:42.661 00:17:42.661 --- 10.0.0.2 ping statistics --- 00:17:42.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.661 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:42.661 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:42.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:42.661 00:17:42.661 --- 10.0.0.3 ping statistics --- 00:17:42.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.662 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:42.662 00:17:42.662 --- 10.0.0.1 ping statistics --- 00:17:42.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.662 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=88574 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 88574 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 88574 ']' 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.662 12:24:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:42.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:42.662 12:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:42.919 [2024-07-25 12:24:57.575866] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:17:42.919 [2024-07-25 12:24:57.575998] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.919 [2024-07-25 12:24:57.712162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:43.177 [2024-07-25 12:24:57.832392] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.177 [2024-07-25 12:24:57.832647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.177 [2024-07-25 12:24:57.832980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.177 [2024-07-25 12:24:57.833255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.177 [2024-07-25 12:24:57.833505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
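With the nvmf_tgt reactors running on cores 0-1 inside the namespace, the script provisions the device under test over JSON-RPC: a TCP transport, a 64 MiB/512 B malloc bdev, and subsystem cnode1 exposed on two listeners so the host sees two I/O paths. A sketch of that sequence, mirroring the rpc.py calls traced below (flag meanings are inferred from rpc.py's usual options: -a allow-any-host, -r ANA reporting, -m max namespaces):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on one subsystem give bdevperf one path per port.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf then attaches a controller to each port with `-x multipath`, and the rest of the trace cycles each listener's ANA state via nvmf_subsystem_listener_set_ana_state while the port_status helper polls `bdev_nvme_get_io_paths` through jq to confirm the current/connected/accessible flags per trsvcid.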
00:17:43.177 [2024-07-25 12:24:57.833703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.177 [2024-07-25 12:24:57.833718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.744 12:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.744 12:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:17:43.744 12:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.744 12:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:43.744 12:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:43.744 12:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.744 12:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88574 00:17:43.744 12:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:44.002 [2024-07-25 12:24:58.842708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.002 12:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:44.260 Malloc0 00:17:44.519 12:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:44.519 12:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:44.777 12:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.035 [2024-07-25 12:24:59.826092] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.035 12:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:45.293 [2024-07-25 12:25:00.114531] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:45.293 12:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=88686 00:17:45.293 12:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:45.293 12:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.293 12:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 88686 /var/tmp/bdevperf.sock 00:17:45.293 12:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 88686 ']' 00:17:45.293 12:25:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.293 12:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.293 12:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.293 12:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.293 12:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:46.667 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.667 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:17:46.667 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:46.667 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:46.925 Nvme0n1 00:17:46.925 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:47.491 Nvme0n1 00:17:47.491 12:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:47.491 12:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:49.390 12:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:49.390 12:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:49.648 12:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:49.907 12:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:50.843 12:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:50.843 12:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:50.843 12:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.843 12:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:51.101 12:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.101 12:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:51.101 12:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.101 12:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:51.360 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:51.360 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:51.360 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.360 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:51.618 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.618 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:51.618 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:51.618 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.184 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.184 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:52.184 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:52.184 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.184 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.184 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:52.184 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.184 12:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:52.442 12:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.442 12:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state 
non_optimized optimized 00:17:52.442 12:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:52.701 12:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:52.960 12:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:53.896 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:53.896 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:53.896 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.896 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:54.464 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:54.464 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:54.464 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:54.464 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.464 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.464 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:54.464 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:54.465 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.032 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.032 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:55.032 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.032 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:55.032 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.032 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:55.032 12:25:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.032 12:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:55.291 12:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.291 12:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:55.291 12:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:55.291 12:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.549 12:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.549 12:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:55.549 12:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:55.807 12:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:56.101 12:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:57.036 12:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:57.036 12:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:57.036 12:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.036 12:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:57.294 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.294 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:57.294 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.294 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:57.553 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:57.553 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:57.811 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.811 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:58.070 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.070 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:58.070 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:58.070 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.328 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.328 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:58.328 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.328 12:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:58.587 12:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.587 12:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:58.587 12:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:58.587 12:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.845 12:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.845 12:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:58.845 12:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:59.102 12:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:59.361 12:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:00.298 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:00.298 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:00.298 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.298 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:00.557 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.557 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:00.557 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.557 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:00.815 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:00.815 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:00.815 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.815 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:01.073 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.073 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:01.073 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.073 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:01.640 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.640 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:01.640 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.640 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:01.899 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.899 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:01.899 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.899 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:02.157 12:25:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:02.157 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:02.157 12:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:02.415 12:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:02.673 12:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:03.608 12:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:03.608 12:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:03.608 12:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.608 12:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:04.175 12:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:04.175 12:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:04.175 12:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.175 12:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:04.434 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:04.434 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:04.434 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.434 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:04.692 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.692 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:04.692 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.692 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:04.950 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.950 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:04.950 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.950 12:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:05.209 12:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:05.209 12:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:05.209 12:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.209 12:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:05.776 12:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:05.776 12:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:05.776 12:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:06.035 12:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:06.294 12:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:07.230 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:07.230 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:07.230 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.230 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:07.491 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:07.491 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:07.491 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.491 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:07.753 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:18:07.753 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:07.753 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:07.753 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.011 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:08.011 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:08.011 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:08.011 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.580 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:08.580 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:08.580 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.580 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:08.580 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:08.580 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:08.581 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.581 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:09.152 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.152 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:09.410 12:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:09.410 12:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:09.668 12:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:09.926 12:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@120 -- # sleep 1 00:18:10.861 12:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:10.861 12:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:10.861 12:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:10.861 12:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:11.119 12:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:11.119 12:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:11.119 12:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:11.119 12:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.377 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:11.377 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:11.377 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.377 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:11.943 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:11.943 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:11.943 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.943 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:12.201 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.201 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:12.201 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.201 12:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:12.459 12:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.459 12:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
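Every check traced in this section goes through a small set of helpers in host/multipath_status.sh. The sketch below reconstructs them from the xtrace tags visible above (@59/@60 for set_ANA_state, @64 for port_status, @68-@73 for check_status); the bodies are inferred from the traced commands, not copied from the SPDK tree, so treat this as a minimal approximation rather than the shipped code:

    # set_ANA_state <state for 4420> <state for 4421>: reconfigure the ANA
    # state of both target listeners through the target-side RPC server
    # (trace tags @59/@60).
    set_ANA_state() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # port_status <port> <attribute> <expected>: dump bdevperf's view of its
    # NVMe I/O paths over the bdevperf RPC socket and compare one attribute
    # (current, connected or accessible) of the path whose listener trsvcid
    # matches <port> against the expected value (trace tag @64).
    port_status() {
        local status
        status=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

    # check_status <cur:4420> <cur:4421> <conn:4420> <conn:4421> <acc:4420> <acc:4421>:
    # assert all six path flags in one call, in the argument order seen in the
    # trace (tags @68-@73).
    check_status() {
        port_status 4420 current "$1" &&
        port_status 4421 current "$2" &&
        port_status 4420 connected "$3" &&
        port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" &&
        port_status 4421 accessible "$6"
    }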
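Each numbered step in the trace repeats the same rhythm: set_ANA_state reprograms the two listeners, sleep 1 gives the initiator time to pick up the ANA change, and check_status asserts the resulting flags. Up to the @114 check the bdev runs the default active_passive policy, so at most one path is ever current; the @116 call above switches Nvme0n1 to active_active, and from then on every usable path in the preferred ANA group carries I/O, which is why both current flags read true once both listeners report optimized (@121) or, further below, non_optimized (@131). The switch itself is a single RPC against the bdevperf socket:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active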
00:18:12.459 12:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.459 12:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:12.717 12:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.717 12:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:12.717 12:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:12.974 12:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:13.232 12:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:14.168 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:14.168 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:14.168 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.168 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:14.734 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:14.734 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:14.734 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.734 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:14.993 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:14.993 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:14.993 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.993 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:15.251 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.251 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:15.251 12:25:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:15.251 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.509 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.509 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:15.509 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.509 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:15.783 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.783 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:15.783 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.783 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:16.053 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.053 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:16.053 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:16.310 12:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:16.567 12:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:17.938 12:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:17.938 12:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:17.938 12:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:17.938 12:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:17.938 12:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:17.938 12:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:17.938 12:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:17.938 12:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:18.197 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.197 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:18.456 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:18.456 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.713 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.713 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:18.713 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.713 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:18.971 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.971 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:18.971 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:18.971 12:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.229 12:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.229 12:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:19.229 12:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.229 12:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:19.487 12:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.487 12:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:19.487 12:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:19.745 12:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:20.003 12:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:20.937 12:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:20.937 12:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:20.937 12:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.937 12:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:21.295 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.295 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:21.295 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.295 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:21.567 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:21.567 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:21.567 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.567 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:22.133 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.133 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:22.133 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:22.133 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.133 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.133 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:22.133 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.133 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:22.699 12:25:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.699 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:22.699 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.699 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 88686 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 88686 ']' 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 88686 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88686 00:18:22.958 killing process with pid 88686 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88686' 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 88686 00:18:22.958 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 88686 00:18:22.958 Connection closed with partial response: 00:18:22.958 00:18:22.958 00:18:23.229 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 88686 00:18:23.229 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:23.229 [2024-07-25 12:25:00.198349] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:18:23.229 [2024-07-25 12:25:00.198557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88686 ] 00:18:23.229 [2024-07-25 12:25:00.346194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.229 [2024-07-25 12:25:00.470893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.229 Running I/O for 90 seconds... 
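The shutdown above runs the stock killprocess helper from autotest_common.sh: it validates the pid, confirms the process still exists, resolves its name (reactor_2, bdevperf's reactor thread), announces the kill, then kills and waits; the "Connection closed with partial response" lines are bdevperf reporting the dropped connection as it exits. A minimal sketch matching the traced line tags (@950-@974); the shipped function carries extra platform and sudo handling that is omitted here:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                 # @950: reject an empty pid
        kill -0 "$pid" || return 1                # @954: pid must still be alive
        if [ "$(uname)" = Linux ]; then           # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956
        fi
        echo "killing process with pid $pid"      # @968
        kill "$pid"                               # @969
        wait "$pid"                               # @974
    }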
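Everything from the "Starting SPDK v24.09-pre" banner onward is the replayed contents of try.txt, bdevperf's own log for the 90-second run. The NOTICE pairs that follow cover one I/O each: nvme_qpair.c:243 prints the submitted READ or WRITE command and nvme_qpair.c:474 prints its completion. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is the NVMe status printed as (sct/sc): status code type 3h, path-related, status code 02h, meaning the target completed the command with an error because that listener's ANA state was inaccessible at the time. These completions are expected in this test; they are exactly what pushes the host's multipath layer onto the other path. To gauge how much of the run hit them, something like this works against the dumped file (illustrative, not part of the test script):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt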
00:18:23.229 [2024-07-25 12:25:17.109179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.229 [2024-07-25 12:25:17.109968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:23.229 [2024-07-25 12:25:17.109989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110040] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:92 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.110984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.110998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.111018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.111049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.111070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.111085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.111106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.111128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.111775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.230 [2024-07-25 12:25:17.111803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:23.230 [2024-07-25 12:25:17.111835] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.230 [2024-07-25 12:25:17.111853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0
[... repeated nvme_qpair.c command/completion pairs elided: WRITE sqid:1 nsid:1, lba 39760-40368 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:18:23.232 [2024-07-25 12:25:17.115391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.232 [2024-07-25 12:25:17.115406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:23.232 [2024-07-25 12:25:34.710062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.232 [2024-07-25 12:25:34.710134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[... repeated nvme_qpair.c command/completion pairs elided: WRITE sqid:1 nsid:1 (lba 100680-101248, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 nsid:1 (lba 100240-101240, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:18:23.235 [2024-07-25 12:25:34.717522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.235 [2024-07-25 12:25:34.717537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:23.235 [2024-07-25 12:25:34.717557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.235 [2024-07-25 12:25:34.717572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.717607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.717641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.717676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.717711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.717745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.717780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.717829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.717865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.717900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.717936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.717971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.717993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:23.236 [2024-07-25 12:25:34.718261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.718320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.718359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.718394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.718429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.718528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.718543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.720173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.720218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:24 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.720254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.720289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.720356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.720393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.720427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.236 [2024-07-25 12:25:34.720462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.720497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.720531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.720566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.236 [2024-07-25 12:25:34.720600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:23.236 [2024-07-25 12:25:34.720620] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.720635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.720656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.720670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.720701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.720719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.720740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.720755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.720775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.720798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.720820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.720834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.720855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.720869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.720889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.720903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.720923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.720938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.720958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.720972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 
sqhd:0076 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.721016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.721051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.721085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.721120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.721645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.721687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.721740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.721797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.721832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.721867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.721902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.721937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.721973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.721993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.722007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.722028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.722043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.722412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.722438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.722464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.237 [2024-07-25 12:25:34.722480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.722501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.722516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.722536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.722551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:23.237 [2024-07-25 12:25:34.722583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.237 [2024-07-25 12:25:34.722600] 
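The status pair "(03/02)" printed on every completion above is SPDK's (SCT/SC) rendering of the NVMe Status Field: Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), i.e. the ANA state of the path to this namespace made the I/O undeliverable. As a minimal sketch (illustrative only, not SPDK's implementation; the bit layout follows the NVMe base specification), this is how the 16-bit Status Field breaks into the sct/sc/p/m/dnr values shown in these log lines:

/* decode_status.c - illustrative decoder for the NVMe completion Status
 * Field behind entries like "(03/02) ... p:0 m:0 dnr:0".
 * Layout: bit 0 = P (phase tag), bits 8:1 = SC, bits 11:9 = SCT,
 * bit 14 = M, bit 15 = DNR. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t p;   /* phase tag */
    uint8_t sc;  /* status code */
    uint8_t sct; /* status code type */
    uint8_t m;   /* more information available in a log page */
    uint8_t dnr; /* do not retry */
};

static struct nvme_status decode_status(uint16_t sf)
{
    struct nvme_status s = {
        .p   = sf & 0x1,
        .sc  = (sf >> 1) & 0xff,
        .sct = (sf >> 9) & 0x7,
        .m   = (sf >> 14) & 0x1,
        .dnr = (sf >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT 0x3 (Path Related Status) / SC 0x02 (Asymmetric Access
     * Inaccessible): the "(03/02)" pair seen throughout this log. */
    uint16_t sf = (0x3 << 9) | (0x02 << 1);
    struct nvme_status s = decode_status(sf);
    printf("(%02x/%02x) p:%x m:%x dnr:%x\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}

Note that dnr:0 (Do Not Retry clear) on each completion means the host is permitted to retry these commands, e.g. on another path once the controller reports an accessible ANA state, which is presumably what this multipath test exercises.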
[... the same pattern continues from 12:25:34.722 to 12:25:34.730 (sqhd 000a through 003a, lba ~100400-101480) ...]
00:18:23.239 [2024-07-25 12:25:34.730319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:23.239 [2024-07-25 12:25:34.730358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0
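The sqhd field echoed in each completion is the controller's submission queue head pointer. Earlier in this burst it climbed to 007f and wrapped to 0000, consistent with a 128-entry I/O submission queue (an inference from the observed wrap, not something the log states directly). A minimal sketch of that wrap arithmetic, with the queue depth as an assumed parameter:

/* sqhd_wrap.c - illustrative only: submission queue head advance.
 * SQ_DEPTH of 128 is assumed from the 0x007f -> 0x0000 wrap seen above. */
#include <stdint.h>
#include <stdio.h>

#define SQ_DEPTH 128u

static uint16_t advance_sqhd(uint16_t sqhd)
{
    /* the head pointer wraps modulo the queue depth */
    return (uint16_t)((sqhd + 1u) % SQ_DEPTH);
}

int main(void)
{
    uint16_t sqhd = 0x7e;
    for (int i = 0; i < 4; i++) {
        printf("sqhd:%04x\n", sqhd); /* prints 007e, 007f, 0000, 0001 */
        sqhd = advance_sqhd(sqhd);
    }
    return 0;
}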
[... identical completions continue from 12:25:34.730 to 12:25:34.735 (sqhd 003b through 0075, lba up to 102120) ...]
00:18:23.240 [2024-07-25 12:25:34.735172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:23.240 [2024-07-25 12:25:34.735186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:18:23.240 [2024-07-25 12:25:34.735206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:23.240 [2024-07-25
12:25:34.735220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.240 [2024-07-25 12:25:34.735254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.240 [2024-07-25 12:25:34.735312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.240 [2024-07-25 12:25:34.735349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.240 [2024-07-25 12:25:34.735394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.240 [2024-07-25 12:25:34.735428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.240 [2024-07-25 12:25:34.735462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.240 [2024-07-25 12:25:34.735497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.240 [2024-07-25 12:25:34.735531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.240 [2024-07-25 12:25:34.735565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102064 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.240 [2024-07-25 12:25:34.735599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.240 [2024-07-25 12:25:34.735634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.240 [2024-07-25 12:25:34.735654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.735669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.735689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.735702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.735732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.735747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.736951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.736979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.737415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.737451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.737486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.737520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:18:23.241 [2024-07-25 12:25:34.737540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.737554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.737588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.737622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.737657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.737924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.737966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.737986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.738001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.738036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.738081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.738118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.738153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.738187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.738221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.738256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.738303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.241 [2024-07-25 12:25:34.738343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.738377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.738420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.738455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.738475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.738489] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:23.241 [2024-07-25 12:25:34.739686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.241 [2024-07-25 12:25:34.739724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.739752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.739769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.739790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.739804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.739825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.739840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.739860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.739875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.739895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.739909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.739929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.739944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.739964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.739978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.739998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101312 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.740047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740408] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.740738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.740773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 
12:25:34.740793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.740807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.740828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.740842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.742059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.742087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.742113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.742130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.742151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.742166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.742186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.742201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.742221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.742235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.742255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.742269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.742316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.242 [2024-07-25 12:25:34.742335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.742356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.742370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 
cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:23.242 [2024-07-25 12:25:34.742390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.242 [2024-07-25 12:25:34.742405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.742425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.742440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.742460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.742474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.742494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.742508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.742529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.742543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.743754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.743781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.743808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.743824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.743845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.743860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.743880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.743894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.743915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.743929] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.743949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.743975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.743997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 
12:25:34.744306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.243 [2024-07-25 12:25:34.744630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102432 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:23.243 [2024-07-25 12:25:34.744814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.243 [2024-07-25 12:25:34.744828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:23.244 [2024-07-25 12:25:34.744848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.244 [2024-07-25 12:25:34.744862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:23.244 [2024-07-25 12:25:34.744891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.244 [2024-07-25 12:25:34.744907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:23.244 [2024-07-25 12:25:34.744927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.244 [2024-07-25 12:25:34.744941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:23.244 [2024-07-25 12:25:34.744961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.244 [2024-07-25 12:25:34.744976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:23.244 [2024-07-25 12:25:34.744996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.244 [2024-07-25 12:25:34.745010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:23.244 [2024-07-25 12:25:34.745031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:23.244 [2024-07-25 12:25:34.745045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:18:23.244-00:18:23.246 [2024-07-25 12:25:34.745065 - 12:25:34.753350] nvme_qpair.c: [many further READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) NOTICE pairs on sqid:1, nsid:1, len:8, lba 101312-102976, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0]
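For anyone decoding these notices: spdk_nvme_print_completion() prints the completion status as (SCT/SC) in hex, so "(03/02)" is Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (ANA Inaccessible). The target is failing every I/O on this queue pair because the namespace's ANA state is inaccessible, which is the condition this test appears to provoke deliberately. A minimal sketch (not code from the test) of how a completion callback could classify these, assuming the field and enum names from SPDK's spdk/nvme_spec.h:

#include <stdbool.h>
#include "spdk/nvme_spec.h"

/* "(03/02)" in the log above is cpl->status.sct / cpl->status.sc in hex. */
static bool
cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
{
        return cpl->status.sct == SPDK_NVME_SCT_PATH &&
               cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE;
}

Note dnr:0 on every completion: the Do Not Retry bit is clear, so the host is permitted to retry these commands, for example on another path once the ANA state changes.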
00:18:23.246-00:18:23.249 [2024-07-25 12:25:34.753372 - 12:25:34.764552] nvme_qpair.c: [the run of READ/WRITE NOTICE pairs continues on sqid:1, nsid:1, len:8, lba 101392-103280, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0]
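On the command side, each NOTICE pairs a transfer length in logical blocks with an SGL byte length: len:8 blocks on a 512-byte-formatted namespace (an assumption here) is 8 * 512 = 0x1000 bytes, matching the "len:0x1000" on the WRITE entries. A hedged sketch of an I/O callback in the same spirit, treating these completions as path failures rather than data errors (names are illustrative, not from the test code):

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback suitable for spdk_nvme_ns_cmd_read()/write(). With
 * len:8 blocks of 512 bytes the payload is the 0x1000-byte buffer above. */
static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        bool *failed = cb_arg;

        *failed = spdk_nvme_cpl_is_error(cpl);
        if (*failed &&
            cpl->status.sct == SPDK_NVME_SCT_PATH &&
            cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE) {
                /* Path-related status: the medium is fine, and with dnr:0
                 * the command may be retried on an accessible path. */
                fprintf(stderr, "ANA inaccessible (03/02); retry elsewhere\n");
        }
}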
12:25:34.764566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.764600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.764633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.764666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.764728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.764777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.764812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.764847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.764881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.764916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102840 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.764950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.764970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.764985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.765019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.765054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.765157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.765199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.765236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.765388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.765423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.249 [2024-07-25 12:25:34.765458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:18:23.249 [2024-07-25 12:25:34.765659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.249 [2024-07-25 12:25:34.765673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:23.249 [2024-07-25 12:25:34.765694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.765708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.765728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.765742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.765762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.765776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.765796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.765810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.765830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.765844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.765879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.250 [2024-07-25 12:25:34.765893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.765913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.250 [2024-07-25 12:25:34.765927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.250 [2024-07-25 12:25:34.768492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.250 [2024-07-25 12:25:34.768578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.768613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.768664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.768730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.768766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.768801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.768835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.250 [2024-07-25 12:25:34.768870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.250 [2024-07-25 12:25:34.768904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.768939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:23.250 [2024-07-25 12:25:34.768959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.250 [2024-07-25 12:25:34.768973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:23.250 [2024-07-25 12:25:34.768993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:23.250 [2024-07-25 12:25:34.769008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:18:23.250 [2024-07-25 12:25:34.769028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:23.250 [2024-07-25 12:25:34.769042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:18:23.250 [2024-07-25 12:25:34.769062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:23.250 [2024-07-25 12:25:34.769076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:18:23.250 [2024-07-25 12:25:34.769096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:23.250 [2024-07-25 12:25:34.769111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:18:23.250 12:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:23.250 [2024-07-25 12:25:34.769143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:23.250 [2024-07-25 12:25:34.769158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:18:23.250 Received shutdown signal, test time was about 35.435371 seconds
00:18:23.250
00:18:23.250                                                                 Latency(us)
00:18:23.250 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:18:23.250 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:23.250 Verification LBA range: start 0x0 length 0x4000
00:18:23.250 Nvme0n1                                :      35.43    8494.37      33.18      0.00      0.00   15037.44    1027.72 4026531.84
00:18:23.250 ===================================================================================================================
00:18:23.250 Total                                  :              8494.37      33.18      0.00      0.00   15037.44    1027.72 4026531.84
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:23.509 rmmod nvme_tcp
00:18:23.509 rmmod nvme_fabrics
00:18:23.509 rmmod nvme_keyring
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 88574 ']'
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 88574
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 88574 ']'
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 88574
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88574
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:23.509 killing process with pid 88574
12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88574'
12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 88574
00:18:23.509 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 88574
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:18:23.767 ************************************
00:18:23.767 END TEST nvmf_host_multipath_status
00:18:23.767 ************************************
00:18:23.767
00:18:23.767 real	0m41.534s
00:18:23.767 user	2m16.416s
00:18:23.767 sys	0m10.388s
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:23.767 12:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:24.025 ************************************
00:18:24.026 START TEST nvmf_discovery_remove_ifc
00:18:24.026 ************************************
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:18:24.026 * Looking for test storage...
00:18:24.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc --
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
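Two notes for readers following this stretch of the trace. First, the multipath_status summary table above is internally consistent: 8494.37 IOPS at the 4096-byte I/O size named in the job line works out to 8494.37 x 4096 bytes/s, about 33.18 MiB/s, which matches the MiB/s column. Second, the build_nvmf_app_args steps traced just below reduce to roughly the following shell logic; this is a simplified sketch reconstructed from the xtrace lines in this log, not the verbatim nvmf/common.sh source:

  # Sketch (reconstructed from the traced fragments; variable values assumed):
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id and trace-group mask, as traced
  NVMF_APP+=("${NO_HUGE[@]}")                    # optional no-hugepage flags (empty here)
  # Later, once the namespace exists, the whole command is prefixed so the
  # target runs inside it (see the nvmfappstart -m 0x2 call further down):
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")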
00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.026 12:25:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:24.026 Cannot find device "nvmf_tgt_br" 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.026 Cannot find device "nvmf_tgt_br2" 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:24.026 Cannot find device "nvmf_tgt_br" 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:24.026 Cannot find device "nvmf_tgt_br2" 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.026 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:24.027 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file 
or directory 00:18:24.027 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:24.027 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:24.027 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:24.285 12:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:24.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:24.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:18:24.285 00:18:24.285 --- 10.0.0.2 ping statistics --- 00:18:24.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.285 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:24.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:24.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:18:24.285 00:18:24.285 --- 10.0.0.3 ping statistics --- 00:18:24.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.285 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:18:24.285 00:18:24.285 --- 10.0.0.1 ping statistics --- 00:18:24.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.285 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.285 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90009 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90009 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 90009 ']' 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.286 12:25:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.286 12:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:24.286 [2024-07-25 12:25:39.146319] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:18:24.286 [2024-07-25 12:25:39.146437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.544 [2024-07-25 12:25:39.287594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.544 [2024-07-25 12:25:39.404557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.544 [2024-07-25 12:25:39.404894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.544 [2024-07-25 12:25:39.404973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.544 [2024-07-25 12:25:39.405045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.544 [2024-07-25 12:25:39.405119] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.544 [2024-07-25 12:25:39.405205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:25.478 [2024-07-25 12:25:40.124157] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.478 [2024-07-25 12:25:40.132383] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:25.478 null0 00:18:25.478 [2024-07-25 12:25:40.164162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90059 
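At this point the topology is in place: the target runs inside the nvmf_tgt_ns_spdk namespace on RPC socket /var/tmp/spdk.sock, listening on 10.0.0.2 port 8009 (discovery) and port 4420 (I/O), while the host-side SPDK app started next uses its own RPC socket at /tmp/host.sock. Outside the harness, the same discovery endpoint could be poked by hand with nvme-cli; this is illustrative only, since the test instead drives discovery through SPDK's bdev_nvme host stack:

  # Illustrative manual check (not part of the test run):
  nvme discover -t tcp -a 10.0.0.2 -s 8009    # list subsystems behind the discovery service
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0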
00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90059 /tmp/host.sock 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 90059 ']' 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.478 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.478 12:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:25.478 [2024-07-25 12:25:40.240022] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:18:25.478 [2024-07-25 12:25:40.240129] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90059 ] 00:18:25.737 [2024-07-25 12:25:40.375734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.737 [2024-07-25 12:25:40.481587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.671 12:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:27.642 [2024-07-25 12:25:42.397563] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:27.642 [2024-07-25 12:25:42.397606] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:27.642 [2024-07-25 12:25:42.397656] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:27.642 [2024-07-25 12:25:42.483742] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:27.900 [2024-07-25 12:25:42.540905] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:27.900 [2024-07-25 12:25:42.540980] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:27.900 [2024-07-25 12:25:42.541017] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:27.900 [2024-07-25 12:25:42.541036] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:27.900 [2024-07-25 12:25:42.541064] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:27.900 [2024-07-25 12:25:42.546257] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19bc650 was disconnected and freed. delete nvme_qpair. 
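The discovery sequence above ends with bdev nvme0n1 attached on the host side. The wait_for_bdev/get_bdev_list traces that follow are a simple poll loop; a minimal sketch of what the repeated rpc_cmd/jq/sort/xargs fragments amount to, with rpc.py -s /tmp/host.sock standing in for the script's rpc_cmd wrapper:

  # Sketch of the polling idiom traced below (helper names match the script;
  # bodies are assumed from the traced pipeline, timeouts omitted):
  get_bdev_list() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {                 # loop until the bdev list equals "$1"
      while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }
  wait_for_bdev nvme0n1             # after the interface is removed: wait_for_bdev ''

The second form, wait_for_bdev '', is what the test uses after deleting 10.0.0.2 from nvmf_tgt_if and downing the link: the bdev should disappear once the ctrlr-loss timeout (2 s here) expires.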
00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:27.900 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.901 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:27.901 12:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:28.835 12:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:28.835 12:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:28.835 12:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:28.835 12:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.835 12:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:28.835 12:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.835 12:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:29.094 12:25:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.094 12:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:29.094 12:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:30.029 12:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:30.989 12:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:32.365 12:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:32.365 12:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:32.365 12:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:32.365 12:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:32.365 12:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.365 12:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:32.365 12:25:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:32.365 12:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.365 12:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:32.365 12:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:33.302 12:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:33.302 12:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.302 12:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:33.302 12:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.302 12:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:33.302 12:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:33.302 12:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:33.302 12:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.302 [2024-07-25 12:25:47.969407] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:33.302 [2024-07-25 12:25:47.969487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.302 [2024-07-25 12:25:47.969505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.302 [2024-07-25 12:25:47.969520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.302 [2024-07-25 12:25:47.969530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.302 [2024-07-25 12:25:47.969540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.302 [2024-07-25 12:25:47.969550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.302 [2024-07-25 12:25:47.969560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.302 [2024-07-25 12:25:47.969570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.302 [2024-07-25 12:25:47.969580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.302 [2024-07-25 12:25:47.969589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.302 [2024-07-25 12:25:47.969599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1985900 is same with the state(5) to be set 00:18:33.302 12:25:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:33.302 12:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:33.302 [2024-07-25 12:25:47.979399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1985900 (9): Bad file descriptor 00:18:33.302 [2024-07-25 12:25:47.989438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:34.238 12:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:34.238 12:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:34.238 12:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:34.238 12:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:34.238 12:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.238 12:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:34.238 12:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:34.238 [2024-07-25 12:25:49.020353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:34.238 [2024-07-25 12:25:49.020421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1985900 with addr=10.0.0.2, port=4420 00:18:34.238 [2024-07-25 12:25:49.020443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1985900 is same with the state(5) to be set 00:18:34.238 [2024-07-25 12:25:49.020489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1985900 (9): Bad file descriptor 00:18:34.238 [2024-07-25 12:25:49.020943] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:34.239 [2024-07-25 12:25:49.020994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:34.239 [2024-07-25 12:25:49.021008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:34.239 [2024-07-25 12:25:49.021020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:34.239 [2024-07-25 12:25:49.021046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.239 [2024-07-25 12:25:49.021058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:34.239 12:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.239 12:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:34.239 12:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:35.201 [2024-07-25 12:25:50.021107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
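[Editor's note] The errno 110 (ETIMEDOUT) read failure above is the delayed effect of the interface teardown at 12:25:42: the test pulled the target's data address and downed the link while the connection was live, so the host only notices once the socket times out, reports its outstanding admin commands as ABORTED - SQ DELETION, and enters the reset/reconnect loop. A sketch of that fault-injection half, with the ip commands copied from the trace and get_bdev_list reused from the earlier note:

    # Drop the target's data address out from under the live TCP connection;
    # the host keeps polling until spdk_sock_recv() fails with ETIMEDOUT.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # With a 2 s ctrlr-loss timeout and 1 s reconnect delay, the failed
    # reconnects exhaust the loss timeout, the controller and its nvme0n1
    # bdev are deleted, and the bdev list drains to empty.
    while [[ -n "$(get_bdev_list)" ]]; do sleep 1; done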
00:18:35.201 [2024-07-25 12:25:50.021187] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:35.201 [2024-07-25 12:25:50.021201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:35.201 [2024-07-25 12:25:50.021213] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:18:35.201 [2024-07-25 12:25:50.021238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.201 [2024-07-25 12:25:50.021270] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:35.201 [2024-07-25 12:25:50.021351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.201 [2024-07-25 12:25:50.021370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.201 [2024-07-25 12:25:50.021385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.201 [2024-07-25 12:25:50.021395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.201 [2024-07-25 12:25:50.021405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.201 [2024-07-25 12:25:50.021414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.201 [2024-07-25 12:25:50.021424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.201 [2024-07-25 12:25:50.021433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.201 [2024-07-25 12:25:50.021444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.201 [2024-07-25 12:25:50.021453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.201 [2024-07-25 12:25:50.021463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
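[Editor's note] Once the data controller is lost, the discovery poller also loses 10.0.0.2:8009, removes the discovery entry for cnode0 (remove_discovery_entry above), and fails the discovery controller itself, which is what the nqn.2014-08.org.nvmexpress.discovery messages record. When reading traces like this, controller states can be inspected directly; an illustrative aside, not a step this test performs:

    # Dump the NVMe controllers known to the host app; useful for matching
    # the "Ctrlr is in error state" messages to a specific controller.
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers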
00:18:35.201 [2024-07-25 12:25:50.021950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19283e0 (9): Bad file descriptor 00:18:35.201 [2024-07-25 12:25:50.022963] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:35.201 [2024-07-25 12:25:50.023004] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:35.201 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:35.201 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.201 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:35.201 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.201 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:35.201 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:35.201 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:35.472 12:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:36.406 12:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:36.406 12:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.406 12:25:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.406 12:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:36.406 12:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:36.406 12:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:36.406 12:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:36.406 12:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.407 12:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:36.407 12:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:37.343 [2024-07-25 12:25:52.029676] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:37.343 [2024-07-25 12:25:52.029712] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:37.343 [2024-07-25 12:25:52.029732] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:37.343 [2024-07-25 12:25:52.115857] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:37.343 [2024-07-25 12:25:52.172341] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:37.343 [2024-07-25 12:25:52.172405] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:37.343 [2024-07-25 12:25:52.172431] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:37.344 [2024-07-25 12:25:52.172449] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:37.344 [2024-07-25 12:25:52.172459] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:37.344 [2024-07-25 12:25:52.178296] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19a1400 was disconnected and freed. delete nvme_qpair. 
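[Editor's note] Restoring the address and link makes the discovery service reachable again; the poller fetches a fresh log page and re-attaches the same subsystem under a new controller name, which is why the namespace now surfaces as nvme1n1 rather than nvme0n1. A sketch of the recovery half, with the ip commands copied from the trace and the wait loop reconstructed as before:

    # Bring the target address/link back; discovery re-attaches the subsystem
    # as nvme1, and its namespace appears as bdev nvme1n1.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    while [[ "$(get_bdev_list)" != nvme1n1 ]]; do sleep 1; done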
00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90059 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 90059 ']' 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 90059 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90059 00:18:37.602 killing process with pid 90059 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90059' 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 90059 00:18:37.602 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 90059 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:37.861 rmmod nvme_tcp 00:18:37.861 rmmod nvme_fabrics 00:18:37.861 rmmod nvme_keyring 00:18:37.861 12:25:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90009 ']' 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90009 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 90009 ']' 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 90009 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90009 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:37.861 killing process with pid 90009 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90009' 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 90009 00:18:37.861 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 90009 00:18:38.119 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:38.119 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:38.119 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:38.119 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:38.119 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:38.119 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.119 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.119 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.379 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:38.379 00:18:38.379 real 0m14.358s 00:18:38.379 user 0m25.786s 00:18:38.379 sys 0m1.675s 00:18:38.379 ************************************ 00:18:38.379 END TEST nvmf_discovery_remove_ifc 00:18:38.379 ************************************ 00:18:38.379 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:38.379 12:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.379 ************************************ 00:18:38.379 START TEST nvmf_identify_kernel_target 00:18:38.379 ************************************ 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:38.379 * Looking for test storage... 00:18:38.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.379 
12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:38.379 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:38.380 Cannot find device "nvmf_tgt_br" 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:38.380 Cannot find device "nvmf_tgt_br2" 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:38.380 Cannot find device "nvmf_tgt_br" 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:38.380 Cannot find device "nvmf_tgt_br2" 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:18:38.380 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:38.639 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:38.639 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:38.639 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:38.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:18:38.640 00:18:38.640 --- 10.0.0.2 ping statistics --- 00:18:38.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.640 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:38.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:38.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:18:38.640 00:18:38.640 --- 10.0.0.3 ping statistics --- 00:18:38.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.640 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:38.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:38.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:38.640 00:18:38.640 --- 10.0.0.1 ping statistics --- 00:18:38.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.640 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:18:38.640 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:38.898 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:38.898 12:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:39.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:39.155 Waiting for block devices as requested 00:18:39.155 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:39.412 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:39.413 No valid GPT data, bailing 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:39.413 12:25:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:39.413 No valid GPT data, bailing 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:39.413 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:39.671 No valid GPT data, bailing 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:39.671 No valid GPT data, bailing 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -a 10.0.0.1 -t tcp -s 4420 00:18:39.671 00:18:39.671 Discovery Log Number of Records 2, Generation counter 2 00:18:39.671 =====Discovery Log Entry 0====== 00:18:39.671 trtype: tcp 00:18:39.671 adrfam: ipv4 00:18:39.671 subtype: current discovery subsystem 00:18:39.671 treq: not specified, sq flow control disable supported 00:18:39.671 portid: 1 00:18:39.671 trsvcid: 4420 00:18:39.671 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:39.671 traddr: 10.0.0.1 00:18:39.671 eflags: none 00:18:39.671 sectype: none 00:18:39.671 =====Discovery Log Entry 1====== 00:18:39.671 trtype: tcp 00:18:39.671 adrfam: ipv4 00:18:39.671 subtype: nvme subsystem 00:18:39.671 treq: not 
specified, sq flow control disable supported 00:18:39.671 portid: 1 00:18:39.671 trsvcid: 4420 00:18:39.671 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:39.671 traddr: 10.0.0.1 00:18:39.671 eflags: none 00:18:39.671 sectype: none 00:18:39.671 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:39.671 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:39.929 ===================================================== 00:18:39.929 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:39.929 ===================================================== 00:18:39.929 Controller Capabilities/Features 00:18:39.929 ================================ 00:18:39.929 Vendor ID: 0000 00:18:39.929 Subsystem Vendor ID: 0000 00:18:39.929 Serial Number: 58e5e202bcaed2030743 00:18:39.929 Model Number: Linux 00:18:39.929 Firmware Version: 6.7.0-68 00:18:39.929 Recommended Arb Burst: 0 00:18:39.929 IEEE OUI Identifier: 00 00 00 00:18:39.929 Multi-path I/O 00:18:39.929 May have multiple subsystem ports: No 00:18:39.929 May have multiple controllers: No 00:18:39.929 Associated with SR-IOV VF: No 00:18:39.929 Max Data Transfer Size: Unlimited 00:18:39.929 Max Number of Namespaces: 0 00:18:39.929 Max Number of I/O Queues: 1024 00:18:39.929 NVMe Specification Version (VS): 1.3 00:18:39.929 NVMe Specification Version (Identify): 1.3 00:18:39.929 Maximum Queue Entries: 1024 00:18:39.929 Contiguous Queues Required: No 00:18:39.929 Arbitration Mechanisms Supported 00:18:39.929 Weighted Round Robin: Not Supported 00:18:39.929 Vendor Specific: Not Supported 00:18:39.929 Reset Timeout: 7500 ms 00:18:39.929 Doorbell Stride: 4 bytes 00:18:39.929 NVM Subsystem Reset: Not Supported 00:18:39.929 Command Sets Supported 00:18:39.929 NVM Command Set: Supported 00:18:39.929 Boot Partition: Not Supported 00:18:39.929 Memory Page Size Minimum: 4096 bytes 00:18:39.929 Memory Page Size Maximum: 4096 bytes 00:18:39.929 Persistent Memory Region: Not Supported 00:18:39.929 Optional Asynchronous Events Supported 00:18:39.929 Namespace Attribute Notices: Not Supported 00:18:39.929 Firmware Activation Notices: Not Supported 00:18:39.929 ANA Change Notices: Not Supported 00:18:39.929 PLE Aggregate Log Change Notices: Not Supported 00:18:39.929 LBA Status Info Alert Notices: Not Supported 00:18:39.929 EGE Aggregate Log Change Notices: Not Supported 00:18:39.929 Normal NVM Subsystem Shutdown event: Not Supported 00:18:39.929 Zone Descriptor Change Notices: Not Supported 00:18:39.930 Discovery Log Change Notices: Supported 00:18:39.930 Controller Attributes 00:18:39.930 128-bit Host Identifier: Not Supported 00:18:39.930 Non-Operational Permissive Mode: Not Supported 00:18:39.930 NVM Sets: Not Supported 00:18:39.930 Read Recovery Levels: Not Supported 00:18:39.930 Endurance Groups: Not Supported 00:18:39.930 Predictable Latency Mode: Not Supported 00:18:39.930 Traffic Based Keep ALive: Not Supported 00:18:39.930 Namespace Granularity: Not Supported 00:18:39.930 SQ Associations: Not Supported 00:18:39.930 UUID List: Not Supported 00:18:39.930 Multi-Domain Subsystem: Not Supported 00:18:39.930 Fixed Capacity Management: Not Supported 00:18:39.930 Variable Capacity Management: Not Supported 00:18:39.930 Delete Endurance Group: Not Supported 00:18:39.930 Delete NVM Set: Not Supported 00:18:39.930 Extended LBA Formats Supported: Not Supported 00:18:39.930 Flexible Data 
Placement Supported: Not Supported 00:18:39.930 00:18:39.930 Controller Memory Buffer Support 00:18:39.930 ================================ 00:18:39.930 Supported: No 00:18:39.930 00:18:39.930 Persistent Memory Region Support 00:18:39.930 ================================ 00:18:39.930 Supported: No 00:18:39.930 00:18:39.930 Admin Command Set Attributes 00:18:39.930 ============================ 00:18:39.930 Security Send/Receive: Not Supported 00:18:39.930 Format NVM: Not Supported 00:18:39.930 Firmware Activate/Download: Not Supported 00:18:39.930 Namespace Management: Not Supported 00:18:39.930 Device Self-Test: Not Supported 00:18:39.930 Directives: Not Supported 00:18:39.930 NVMe-MI: Not Supported 00:18:39.930 Virtualization Management: Not Supported 00:18:39.930 Doorbell Buffer Config: Not Supported 00:18:39.930 Get LBA Status Capability: Not Supported 00:18:39.930 Command & Feature Lockdown Capability: Not Supported 00:18:39.930 Abort Command Limit: 1 00:18:39.930 Async Event Request Limit: 1 00:18:39.930 Number of Firmware Slots: N/A 00:18:39.930 Firmware Slot 1 Read-Only: N/A 00:18:39.930 Firmware Activation Without Reset: N/A 00:18:39.930 Multiple Update Detection Support: N/A 00:18:39.930 Firmware Update Granularity: No Information Provided 00:18:39.930 Per-Namespace SMART Log: No 00:18:39.930 Asymmetric Namespace Access Log Page: Not Supported 00:18:39.930 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:39.930 Command Effects Log Page: Not Supported 00:18:39.930 Get Log Page Extended Data: Supported 00:18:39.930 Telemetry Log Pages: Not Supported 00:18:39.930 Persistent Event Log Pages: Not Supported 00:18:39.930 Supported Log Pages Log Page: May Support 00:18:39.930 Commands Supported & Effects Log Page: Not Supported 00:18:39.930 Feature Identifiers & Effects Log Page:May Support 00:18:39.930 NVMe-MI Commands & Effects Log Page: May Support 00:18:39.930 Data Area 4 for Telemetry Log: Not Supported 00:18:39.930 Error Log Page Entries Supported: 1 00:18:39.930 Keep Alive: Not Supported 00:18:39.930 00:18:39.930 NVM Command Set Attributes 00:18:39.930 ========================== 00:18:39.930 Submission Queue Entry Size 00:18:39.930 Max: 1 00:18:39.930 Min: 1 00:18:39.930 Completion Queue Entry Size 00:18:39.930 Max: 1 00:18:39.930 Min: 1 00:18:39.930 Number of Namespaces: 0 00:18:39.930 Compare Command: Not Supported 00:18:39.930 Write Uncorrectable Command: Not Supported 00:18:39.930 Dataset Management Command: Not Supported 00:18:39.930 Write Zeroes Command: Not Supported 00:18:39.930 Set Features Save Field: Not Supported 00:18:39.930 Reservations: Not Supported 00:18:39.930 Timestamp: Not Supported 00:18:39.930 Copy: Not Supported 00:18:39.930 Volatile Write Cache: Not Present 00:18:39.930 Atomic Write Unit (Normal): 1 00:18:39.930 Atomic Write Unit (PFail): 1 00:18:39.930 Atomic Compare & Write Unit: 1 00:18:39.930 Fused Compare & Write: Not Supported 00:18:39.930 Scatter-Gather List 00:18:39.930 SGL Command Set: Supported 00:18:39.930 SGL Keyed: Not Supported 00:18:39.930 SGL Bit Bucket Descriptor: Not Supported 00:18:39.930 SGL Metadata Pointer: Not Supported 00:18:39.930 Oversized SGL: Not Supported 00:18:39.930 SGL Metadata Address: Not Supported 00:18:39.930 SGL Offset: Supported 00:18:39.930 Transport SGL Data Block: Not Supported 00:18:39.930 Replay Protected Memory Block: Not Supported 00:18:39.930 00:18:39.930 Firmware Slot Information 00:18:39.930 ========================= 00:18:39.930 Active slot: 0 00:18:39.930 00:18:39.930 00:18:39.930 Error Log 
00:18:39.930 ========= 00:18:39.930 00:18:39.930 Active Namespaces 00:18:39.930 ================= 00:18:39.930 Discovery Log Page 00:18:39.930 ================== 00:18:39.930 Generation Counter: 2 00:18:39.930 Number of Records: 2 00:18:39.930 Record Format: 0 00:18:39.930 00:18:39.930 Discovery Log Entry 0 00:18:39.930 ---------------------- 00:18:39.930 Transport Type: 3 (TCP) 00:18:39.930 Address Family: 1 (IPv4) 00:18:39.930 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:39.930 Entry Flags: 00:18:39.930 Duplicate Returned Information: 0 00:18:39.930 Explicit Persistent Connection Support for Discovery: 0 00:18:39.930 Transport Requirements: 00:18:39.930 Secure Channel: Not Specified 00:18:39.930 Port ID: 1 (0x0001) 00:18:39.930 Controller ID: 65535 (0xffff) 00:18:39.930 Admin Max SQ Size: 32 00:18:39.930 Transport Service Identifier: 4420 00:18:39.930 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:39.930 Transport Address: 10.0.0.1 00:18:39.930 Discovery Log Entry 1 00:18:39.930 ---------------------- 00:18:39.930 Transport Type: 3 (TCP) 00:18:39.930 Address Family: 1 (IPv4) 00:18:39.930 Subsystem Type: 2 (NVM Subsystem) 00:18:39.930 Entry Flags: 00:18:39.930 Duplicate Returned Information: 0 00:18:39.930 Explicit Persistent Connection Support for Discovery: 0 00:18:39.930 Transport Requirements: 00:18:39.930 Secure Channel: Not Specified 00:18:39.930 Port ID: 1 (0x0001) 00:18:39.930 Controller ID: 65535 (0xffff) 00:18:39.930 Admin Max SQ Size: 32 00:18:39.930 Transport Service Identifier: 4420 00:18:39.930 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:39.930 Transport Address: 10.0.0.1 00:18:39.930 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:40.190 get_feature(0x01) failed 00:18:40.190 get_feature(0x02) failed 00:18:40.190 get_feature(0x04) failed 00:18:40.190 ===================================================== 00:18:40.190 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:40.190 ===================================================== 00:18:40.190 Controller Capabilities/Features 00:18:40.190 ================================ 00:18:40.190 Vendor ID: 0000 00:18:40.190 Subsystem Vendor ID: 0000 00:18:40.190 Serial Number: 3b9a1a6698058b4f8c3f 00:18:40.190 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:40.190 Firmware Version: 6.7.0-68 00:18:40.190 Recommended Arb Burst: 6 00:18:40.190 IEEE OUI Identifier: 00 00 00 00:18:40.190 Multi-path I/O 00:18:40.190 May have multiple subsystem ports: Yes 00:18:40.190 May have multiple controllers: Yes 00:18:40.190 Associated with SR-IOV VF: No 00:18:40.190 Max Data Transfer Size: Unlimited 00:18:40.190 Max Number of Namespaces: 1024 00:18:40.190 Max Number of I/O Queues: 128 00:18:40.190 NVMe Specification Version (VS): 1.3 00:18:40.190 NVMe Specification Version (Identify): 1.3 00:18:40.190 Maximum Queue Entries: 1024 00:18:40.190 Contiguous Queues Required: No 00:18:40.190 Arbitration Mechanisms Supported 00:18:40.190 Weighted Round Robin: Not Supported 00:18:40.190 Vendor Specific: Not Supported 00:18:40.190 Reset Timeout: 7500 ms 00:18:40.190 Doorbell Stride: 4 bytes 00:18:40.190 NVM Subsystem Reset: Not Supported 00:18:40.190 Command Sets Supported 00:18:40.190 NVM Command Set: Supported 00:18:40.190 Boot Partition: Not Supported 00:18:40.190 Memory 
Page Size Minimum: 4096 bytes 00:18:40.190 Memory Page Size Maximum: 4096 bytes 00:18:40.190 Persistent Memory Region: Not Supported 00:18:40.190 Optional Asynchronous Events Supported 00:18:40.190 Namespace Attribute Notices: Supported 00:18:40.190 Firmware Activation Notices: Not Supported 00:18:40.190 ANA Change Notices: Supported 00:18:40.190 PLE Aggregate Log Change Notices: Not Supported 00:18:40.190 LBA Status Info Alert Notices: Not Supported 00:18:40.190 EGE Aggregate Log Change Notices: Not Supported 00:18:40.190 Normal NVM Subsystem Shutdown event: Not Supported 00:18:40.190 Zone Descriptor Change Notices: Not Supported 00:18:40.190 Discovery Log Change Notices: Not Supported 00:18:40.190 Controller Attributes 00:18:40.190 128-bit Host Identifier: Supported 00:18:40.190 Non-Operational Permissive Mode: Not Supported 00:18:40.190 NVM Sets: Not Supported 00:18:40.190 Read Recovery Levels: Not Supported 00:18:40.190 Endurance Groups: Not Supported 00:18:40.190 Predictable Latency Mode: Not Supported 00:18:40.190 Traffic Based Keep ALive: Supported 00:18:40.190 Namespace Granularity: Not Supported 00:18:40.190 SQ Associations: Not Supported 00:18:40.190 UUID List: Not Supported 00:18:40.190 Multi-Domain Subsystem: Not Supported 00:18:40.190 Fixed Capacity Management: Not Supported 00:18:40.190 Variable Capacity Management: Not Supported 00:18:40.190 Delete Endurance Group: Not Supported 00:18:40.190 Delete NVM Set: Not Supported 00:18:40.191 Extended LBA Formats Supported: Not Supported 00:18:40.191 Flexible Data Placement Supported: Not Supported 00:18:40.191 00:18:40.191 Controller Memory Buffer Support 00:18:40.191 ================================ 00:18:40.191 Supported: No 00:18:40.191 00:18:40.191 Persistent Memory Region Support 00:18:40.191 ================================ 00:18:40.191 Supported: No 00:18:40.191 00:18:40.191 Admin Command Set Attributes 00:18:40.191 ============================ 00:18:40.191 Security Send/Receive: Not Supported 00:18:40.191 Format NVM: Not Supported 00:18:40.191 Firmware Activate/Download: Not Supported 00:18:40.191 Namespace Management: Not Supported 00:18:40.191 Device Self-Test: Not Supported 00:18:40.191 Directives: Not Supported 00:18:40.191 NVMe-MI: Not Supported 00:18:40.191 Virtualization Management: Not Supported 00:18:40.191 Doorbell Buffer Config: Not Supported 00:18:40.191 Get LBA Status Capability: Not Supported 00:18:40.191 Command & Feature Lockdown Capability: Not Supported 00:18:40.191 Abort Command Limit: 4 00:18:40.191 Async Event Request Limit: 4 00:18:40.191 Number of Firmware Slots: N/A 00:18:40.191 Firmware Slot 1 Read-Only: N/A 00:18:40.191 Firmware Activation Without Reset: N/A 00:18:40.191 Multiple Update Detection Support: N/A 00:18:40.191 Firmware Update Granularity: No Information Provided 00:18:40.191 Per-Namespace SMART Log: Yes 00:18:40.191 Asymmetric Namespace Access Log Page: Supported 00:18:40.191 ANA Transition Time : 10 sec 00:18:40.191 00:18:40.191 Asymmetric Namespace Access Capabilities 00:18:40.191 ANA Optimized State : Supported 00:18:40.191 ANA Non-Optimized State : Supported 00:18:40.191 ANA Inaccessible State : Supported 00:18:40.191 ANA Persistent Loss State : Supported 00:18:40.191 ANA Change State : Supported 00:18:40.191 ANAGRPID is not changed : No 00:18:40.191 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:40.191 00:18:40.191 ANA Group Identifier Maximum : 128 00:18:40.191 Number of ANA Group Identifiers : 128 00:18:40.191 Max Number of Allowed Namespaces : 1024 00:18:40.191 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:40.191 Command Effects Log Page: Supported 00:18:40.191 Get Log Page Extended Data: Supported 00:18:40.191 Telemetry Log Pages: Not Supported 00:18:40.191 Persistent Event Log Pages: Not Supported 00:18:40.191 Supported Log Pages Log Page: May Support 00:18:40.191 Commands Supported & Effects Log Page: Not Supported 00:18:40.191 Feature Identifiers & Effects Log Page:May Support 00:18:40.191 NVMe-MI Commands & Effects Log Page: May Support 00:18:40.191 Data Area 4 for Telemetry Log: Not Supported 00:18:40.191 Error Log Page Entries Supported: 128 00:18:40.191 Keep Alive: Supported 00:18:40.191 Keep Alive Granularity: 1000 ms 00:18:40.191 00:18:40.191 NVM Command Set Attributes 00:18:40.191 ========================== 00:18:40.191 Submission Queue Entry Size 00:18:40.191 Max: 64 00:18:40.191 Min: 64 00:18:40.191 Completion Queue Entry Size 00:18:40.191 Max: 16 00:18:40.191 Min: 16 00:18:40.191 Number of Namespaces: 1024 00:18:40.191 Compare Command: Not Supported 00:18:40.191 Write Uncorrectable Command: Not Supported 00:18:40.191 Dataset Management Command: Supported 00:18:40.191 Write Zeroes Command: Supported 00:18:40.191 Set Features Save Field: Not Supported 00:18:40.191 Reservations: Not Supported 00:18:40.191 Timestamp: Not Supported 00:18:40.191 Copy: Not Supported 00:18:40.191 Volatile Write Cache: Present 00:18:40.191 Atomic Write Unit (Normal): 1 00:18:40.191 Atomic Write Unit (PFail): 1 00:18:40.191 Atomic Compare & Write Unit: 1 00:18:40.191 Fused Compare & Write: Not Supported 00:18:40.191 Scatter-Gather List 00:18:40.191 SGL Command Set: Supported 00:18:40.191 SGL Keyed: Not Supported 00:18:40.191 SGL Bit Bucket Descriptor: Not Supported 00:18:40.191 SGL Metadata Pointer: Not Supported 00:18:40.191 Oversized SGL: Not Supported 00:18:40.191 SGL Metadata Address: Not Supported 00:18:40.191 SGL Offset: Supported 00:18:40.191 Transport SGL Data Block: Not Supported 00:18:40.191 Replay Protected Memory Block: Not Supported 00:18:40.191 00:18:40.191 Firmware Slot Information 00:18:40.191 ========================= 00:18:40.191 Active slot: 0 00:18:40.191 00:18:40.191 Asymmetric Namespace Access 00:18:40.191 =========================== 00:18:40.191 Change Count : 0 00:18:40.191 Number of ANA Group Descriptors : 1 00:18:40.191 ANA Group Descriptor : 0 00:18:40.191 ANA Group ID : 1 00:18:40.191 Number of NSID Values : 1 00:18:40.191 Change Count : 0 00:18:40.191 ANA State : 1 00:18:40.191 Namespace Identifier : 1 00:18:40.191 00:18:40.191 Commands Supported and Effects 00:18:40.191 ============================== 00:18:40.191 Admin Commands 00:18:40.191 -------------- 00:18:40.191 Get Log Page (02h): Supported 00:18:40.191 Identify (06h): Supported 00:18:40.191 Abort (08h): Supported 00:18:40.191 Set Features (09h): Supported 00:18:40.191 Get Features (0Ah): Supported 00:18:40.191 Asynchronous Event Request (0Ch): Supported 00:18:40.191 Keep Alive (18h): Supported 00:18:40.191 I/O Commands 00:18:40.191 ------------ 00:18:40.191 Flush (00h): Supported 00:18:40.191 Write (01h): Supported LBA-Change 00:18:40.191 Read (02h): Supported 00:18:40.191 Write Zeroes (08h): Supported LBA-Change 00:18:40.191 Dataset Management (09h): Supported 00:18:40.191 00:18:40.191 Error Log 00:18:40.191 ========= 00:18:40.191 Entry: 0 00:18:40.191 Error Count: 0x3 00:18:40.191 Submission Queue Id: 0x0 00:18:40.191 Command Id: 0x5 00:18:40.191 Phase Bit: 0 00:18:40.191 Status Code: 0x2 00:18:40.191 Status Code Type: 0x0 00:18:40.191 Do Not Retry: 1 00:18:40.191 Error 
Location: 0x28 00:18:40.191 LBA: 0x0 00:18:40.191 Namespace: 0x0 00:18:40.191 Vendor Log Page: 0x0 00:18:40.191 ----------- 00:18:40.191 Entry: 1 00:18:40.191 Error Count: 0x2 00:18:40.191 Submission Queue Id: 0x0 00:18:40.191 Command Id: 0x5 00:18:40.191 Phase Bit: 0 00:18:40.191 Status Code: 0x2 00:18:40.191 Status Code Type: 0x0 00:18:40.191 Do Not Retry: 1 00:18:40.191 Error Location: 0x28 00:18:40.191 LBA: 0x0 00:18:40.191 Namespace: 0x0 00:18:40.191 Vendor Log Page: 0x0 00:18:40.191 ----------- 00:18:40.191 Entry: 2 00:18:40.191 Error Count: 0x1 00:18:40.191 Submission Queue Id: 0x0 00:18:40.191 Command Id: 0x4 00:18:40.191 Phase Bit: 0 00:18:40.191 Status Code: 0x2 00:18:40.191 Status Code Type: 0x0 00:18:40.191 Do Not Retry: 1 00:18:40.192 Error Location: 0x28 00:18:40.192 LBA: 0x0 00:18:40.192 Namespace: 0x0 00:18:40.192 Vendor Log Page: 0x0 00:18:40.192 00:18:40.192 Number of Queues 00:18:40.192 ================ 00:18:40.192 Number of I/O Submission Queues: 128 00:18:40.192 Number of I/O Completion Queues: 128 00:18:40.192 00:18:40.192 ZNS Specific Controller Data 00:18:40.192 ============================ 00:18:40.192 Zone Append Size Limit: 0 00:18:40.192 00:18:40.192 00:18:40.192 Active Namespaces 00:18:40.192 ================= 00:18:40.192 get_feature(0x05) failed 00:18:40.192 Namespace ID:1 00:18:40.192 Command Set Identifier: NVM (00h) 00:18:40.192 Deallocate: Supported 00:18:40.192 Deallocated/Unwritten Error: Not Supported 00:18:40.192 Deallocated Read Value: Unknown 00:18:40.192 Deallocate in Write Zeroes: Not Supported 00:18:40.192 Deallocated Guard Field: 0xFFFF 00:18:40.192 Flush: Supported 00:18:40.192 Reservation: Not Supported 00:18:40.192 Namespace Sharing Capabilities: Multiple Controllers 00:18:40.192 Size (in LBAs): 1310720 (5GiB) 00:18:40.192 Capacity (in LBAs): 1310720 (5GiB) 00:18:40.192 Utilization (in LBAs): 1310720 (5GiB) 00:18:40.192 UUID: 156e3793-0bc0-4d26-8e3a-aff97bb5eb87 00:18:40.192 Thin Provisioning: Not Supported 00:18:40.192 Per-NS Atomic Units: Yes 00:18:40.192 Atomic Boundary Size (Normal): 0 00:18:40.192 Atomic Boundary Size (PFail): 0 00:18:40.192 Atomic Boundary Offset: 0 00:18:40.192 NGUID/EUI64 Never Reused: No 00:18:40.192 ANA group ID: 1 00:18:40.192 Namespace Write Protected: No 00:18:40.192 Number of LBA Formats: 1 00:18:40.192 Current LBA Format: LBA Format #00 00:18:40.192 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:40.192 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.192 rmmod nvme_tcp 00:18:40.192 rmmod nvme_fabrics 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:18:40.192 12:25:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:40.192 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:40.193 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:40.193 12:25:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:40.193 12:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:41.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:41.127 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:41.127 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:41.127 00:18:41.127 real 0m2.808s 00:18:41.127 user 0m0.947s 00:18:41.127 sys 0m1.363s 00:18:41.127 12:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:41.127 12:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.127 ************************************ 00:18:41.127 END TEST nvmf_identify_kernel_target 00:18:41.127 ************************************ 00:18:41.127 12:25:55 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:41.127 12:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:41.127 12:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:41.127 12:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.127 ************************************ 00:18:41.127 START TEST nvmf_auth_host 00:18:41.127 ************************************ 00:18:41.127 12:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:41.386 * Looking for test storage... 00:18:41.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.386 12:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:41.386 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:41.387 Cannot find device "nvmf_tgt_br" 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.387 Cannot find device "nvmf_tgt_br2" 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:41.387 Cannot find device "nvmf_tgt_br" 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:41.387 Cannot find device "nvmf_tgt_br2" 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.387 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.645 12:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:41.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:18:41.645 00:18:41.645 --- 10.0.0.2 ping statistics --- 00:18:41.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.645 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:41.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:41.645 00:18:41.645 --- 10.0.0.3 ping statistics --- 00:18:41.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.645 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:41.645 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:41.645 00:18:41.645 --- 10.0.0.1 ping statistics --- 00:18:41.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.645 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=90954 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 90954 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 90954 ']' 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
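[annotation] The nvmf_veth_init sequence traced above is what gives this test its network: a dedicated namespace for the target, two veth pairs (initiator side and target side), and a bridge joining them, with iptables rules to admit NVMe/TCP traffic on port 4420 and pings to prove reachability before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same topology, using the interface, namespace, and address names from the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is omitted for brevity):

  #!/usr/bin/env bash
  # Namespace for the target plus a veth pair for each side of the link.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Initiator address in the root namespace, target address inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring every endpoint up, including loopback inside the namespace.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the two halves and allow traffic to reach the NVMe/TCP listener.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Reachability checks, as in the trace.
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the bridge in place, the target started via 'ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth' listens on 10.0.0.2 while host-side tools connect from 10.0.0.1 across the bridge.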
00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.646 12:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cd1202209f577605d23bd63c8595c656 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.17z 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cd1202209f577605d23bd63c8595c656 0 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cd1202209f577605d23bd63c8595c656 0 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cd1202209f577605d23bd63c8595c656 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.17z 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.17z 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.17z 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.019 12:25:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f90d8232a7ff3090c8ccd9211ec2c6d46251e1697add2411af190e484eb74c11 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4MO 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f90d8232a7ff3090c8ccd9211ec2c6d46251e1697add2411af190e484eb74c11 3 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f90d8232a7ff3090c8ccd9211ec2c6d46251e1697add2411af190e484eb74c11 3 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f90d8232a7ff3090c8ccd9211ec2c6d46251e1697add2411af190e484eb74c11 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4MO 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4MO 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4MO 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=64559a51959f628a97c625e3767a00cba97e58debe38c1dc 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.clv 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 64559a51959f628a97c625e3767a00cba97e58debe38c1dc 0 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 64559a51959f628a97c625e3767a00cba97e58debe38c1dc 0 
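[annotation] gen_dhchap_key, whose xtrace dominates this part of the log, is how auth.sh populates keys[] and ckeys[]: it pulls len/2 random bytes from /dev/urandom as hex, then format_key wraps them into a DHHC-1 secret string that is written to a mode-0600 file under /tmp. A compact sketch of the same idea, assuming the DH-HMAC-CHAP secret representation used by nvme-cli and the kernel (a two-digit digest id, 00/01/02/03 for null/sha256/sha384/sha512, followed by base64 of the raw key with its little-endian CRC-32 appended); the helper name and argument order mirror the trace:

  # gen_dhchap_key <digest> <len>: print a DHHC-1 secret built from
  # len hex characters (len/2 bytes) of fresh /dev/urandom key material.
  gen_dhchap_key() {
      local digest=$1 len=$2 key
      local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      python3 -c 'import base64, sys, zlib; key = bytes.fromhex(sys.argv[1]); crc = zlib.crc32(key).to_bytes(4, "little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))' "$key" "${ids[$digest]}"
  }

  gen_dhchap_key null 32     # shape of keys[0] above, e.g. DHHC-1:00:...:
  gen_dhchap_key sha512 64   # shape of ckeys[0] above, e.g. DHHC-1:03:...:

The CRC suffix lets a consumer of the secret detect a corrupted key before attempting authentication, and the trace's chmod 0600 on each /tmp/spdk.key-* file keeps the plaintext secrets private to the test user.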
00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.019 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=64559a51959f628a97c625e3767a00cba97e58debe38c1dc 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.clv 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.clv 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.clv 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6b41e6ccb2695a9bd8fe7f4ee94fb312ef86809375ba59af 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wJX 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6b41e6ccb2695a9bd8fe7f4ee94fb312ef86809375ba59af 2 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6b41e6ccb2695a9bd8fe7f4ee94fb312ef86809375ba59af 2 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6b41e6ccb2695a9bd8fe7f4ee94fb312ef86809375ba59af 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wJX 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wJX 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.wJX 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.020 12:25:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=83d26dbe0640556794310b625c7ed93b 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CHN 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 83d26dbe0640556794310b625c7ed93b 1 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 83d26dbe0640556794310b625c7ed93b 1 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=83d26dbe0640556794310b625c7ed93b 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CHN 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CHN 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.CHN 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d2149761e8a17bb890e98cdf2d45215 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.I2P 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d2149761e8a17bb890e98cdf2d45215 1 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d2149761e8a17bb890e98cdf2d45215 1 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=4d2149761e8a17bb890e98cdf2d45215 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:43.020 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.I2P 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.I2P 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.I2P 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d1e5d533632c42fd050ac355c93b81eeec0290e8cea6495 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ROY 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d1e5d533632c42fd050ac355c93b81eeec0290e8cea6495 2 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d1e5d533632c42fd050ac355c93b81eeec0290e8cea6495 2 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4d1e5d533632c42fd050ac355c93b81eeec0290e8cea6495 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:43.279 12:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ROY 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ROY 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ROY 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:43.279 12:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=88c88ea3ad02c55ac1933245c676e379 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.np0 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 88c88ea3ad02c55ac1933245c676e379 0 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 88c88ea3ad02c55ac1933245c676e379 0 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=88c88ea3ad02c55ac1933245c676e379 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.np0 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.np0 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.np0 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de9033dc57aa7e8bcd4ab114a13153a80bfcea52e059a0fffe2120046de6fc50 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sP4 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de9033dc57aa7e8bcd4ab114a13153a80bfcea52e059a0fffe2120046de6fc50 3 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de9033dc57aa7e8bcd4ab114a13153a80bfcea52e059a0fffe2120046de6fc50 3 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de9033dc57aa7e8bcd4ab114a13153a80bfcea52e059a0fffe2120046de6fc50 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sP4 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sP4 00:18:43.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.sP4 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 90954 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 90954 ']' 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.279 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.17z 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4MO ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4MO 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.clv 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.wJX ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.wJX 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.CHN 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.I2P ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I2P 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ROY 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.np0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.np0 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.sP4 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.846 12:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.846 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:43.847 12:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:44.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:44.120 Waiting for block devices as requested 00:18:44.120 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:44.406 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:44.973 No valid GPT data, bailing 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:44.973 No valid GPT data, bailing 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:44.973 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:44.974 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:44.974 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:44.974 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:44.974 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:44.974 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:44.974 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:44.974 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:44.974 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:44.974 No valid GPT data, bailing 00:18:44.974 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:45.233 No valid GPT data, bailing 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -a 10.0.0.1 -t tcp -s 4420 00:18:45.233 00:18:45.233 Discovery Log Number of Records 2, Generation counter 2 00:18:45.233 =====Discovery Log Entry 0====== 00:18:45.233 trtype: tcp 00:18:45.233 adrfam: ipv4 00:18:45.233 subtype: current discovery subsystem 00:18:45.233 treq: not specified, sq flow control disable supported 00:18:45.233 portid: 1 00:18:45.233 trsvcid: 4420 00:18:45.233 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:45.233 traddr: 10.0.0.1 00:18:45.233 eflags: none 00:18:45.233 sectype: none 00:18:45.233 =====Discovery Log Entry 1====== 00:18:45.233 trtype: tcp 00:18:45.233 adrfam: ipv4 00:18:45.233 subtype: nvme subsystem 00:18:45.233 treq: not specified, sq flow control disable supported 00:18:45.233 portid: 1 00:18:45.233 trsvcid: 4420 00:18:45.233 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:45.233 traddr: 10.0.0.1 00:18:45.233 eflags: none 00:18:45.233 sectype: none 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:45.233 12:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:45.492 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:45.492 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:18:45.492 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:45.492 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:45.492 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:45.492 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.493 nvme0n1 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.493 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.751 nvme0n1 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.751 
12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.751 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.752 12:26:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.752 nvme0n1 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.752 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:46.010 12:26:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.010 nvme0n1 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.010 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.011 12:26:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.011 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.269 nvme0n1 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.269 12:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.269 
12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
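The trace above completes one full connect_authenticate pass: bdev_nvme_set_options restricts the host to a single digest/dhgroup pair, then bdev_nvme_attach_controller dials the kernel target with that key's credentials. A condensed sketch of the same two calls follows, assuming rpc_cmd in this log wraps SPDK's scripts/rpc.py against /var/tmp/spdk.sock (only flags that appear verbatim in the trace are used):

# One digest/dhgroup combination per pass, as in the loop traced above
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# keyid=4 carries no controller key (ckeys[4]=''), so this pass is
# unidirectional authentication and --dhchap-ctrlr-key is omitted
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4

Earlier passes in this section (keyids 0 through 3) run the same pair of calls with both --dhchap-key keyN and --dhchap-ctrlr-key ckeyN for bidirectional authentication.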
00:18:46.269 nvme0n1 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.269 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:46.528 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:46.787 12:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.787 nvme0n1 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.787 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.045 12:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.045 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.046 12:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.046 nvme0n1 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.046 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.304 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.305 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.305 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.305 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.305 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.305 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.305 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.305 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.305 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.305 12:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.305 nvme0n1 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.305 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.563 nvme0n1 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.563 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.564 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.822 nvme0n1 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.822 12:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.400 12:26:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.400 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.691 nvme0n1 00:18:48.691 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.692 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.692 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.692 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.692 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.692 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.692 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.692 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.694 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.695 12:26:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.695 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.957 nvme0n1 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.957 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.215 nvme0n1 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.215 12:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.474 nvme0n1 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:49.474 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:49.475 12:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.475 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 nvme0n1 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:49.733 12:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.635 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 nvme0n1 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.894 12:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.461 nvme0n1 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.461 12:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.461 12:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.461 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.719 nvme0n1 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.719 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:53.025 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.025 
12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.283 nvme0n1 00:18:53.283 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.283 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.283 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.283 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.283 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.283 12:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.283 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.283 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.283 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.284 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.851 nvme0n1 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.851 12:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.851 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.852 12:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.417 nvme0n1 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.417 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.238 nvme0n1 00:18:55.238 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.238 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.238 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.238 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.238 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.238 12:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:55.238 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.239 
12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.239 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.172 nvme0n1 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.172 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.173 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.173 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.173 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.173 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.173 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.173 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.173 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:56.173 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.173 12:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.743 nvme0n1 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.743 12:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.743 12:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.743 12:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.319 nvme0n1 00:18:57.319 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.319 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.319 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.319 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.319 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.577 nvme0n1 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.577 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.835 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.836 nvme0n1 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:57.836 
12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.836 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.094 nvme0n1 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.094 
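
Each iteration also resolves which address to dial via get_main_ns_ip: an associative array maps the transport to the name of the environment variable carrying the right IP, and indirect expansion turns that name into the 10.0.0.1 seen in the echo. A reconstruction of that helper from the trace (variable names are taken verbatim from the log; the error returns are an assumption):

    # get_main_ns_ip, as reconstructed from the nvmf/common.sh lines above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # no transport set
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1            # indirect: value of that variable
        echo "${!ip}"                          # 10.0.0.1 in this run
    }

    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
    get_main_ns_ip    # -> 10.0.0.1
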
12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.094 nvme0n1 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.094 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.095 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.095 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.095 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.095 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.095 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.095 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.353 12:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.353 nvme0n1 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:58.353 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.354 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.611 nvme0n1 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.611 
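
Between attaches the script verifies that the controller it expects is present and then detaches it, so a leaked controller from one iteration cannot mask a failure in the next. The check is the bdev_nvme_get_controllers/jq pair that recurs throughout this log:

    # Confirm nvme0 exists, then drop it before the next key is tried.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] || { echo "expected controller nvme0, got: $name" >&2; exit 1; }
    scripts/rpc.py bdev_nvme_detach_controller nvme0
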
12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:58.611 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.612 12:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.612 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.869 nvme0n1 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:58.869 12:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.869 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.870 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.127 nvme0n1 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.127 12:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.127 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:59.128 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.128 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.128 nvme0n1 00:18:59.128 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.128 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.128 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.128 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.128 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.128 12:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.385 
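
Key id 4, which starts here, is the one entry configured without a controller key: ckeys[4] is empty, so the `${ckeys[keyid]:+...}` expansion at host/auth.sh@58 contributes no extra arguments, and the later `[[ -z '' ]]` guard skips echoing a controller key on the target side. The idiom in isolation, with illustrative array contents:

    # ${var:+word} expands to word only if var is set and non-empty, so key
    # ids without a controller key silently drop the --dhchap-ctrlr-key pair.
    ckeys=("DHHC-1:03:..." "DHHC-1:02:..." "DHHC-1:01:..." "DHHC-1:00:..." "")
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "extra attach args: ${ckey[*]:-<none>}"   # prints <none> for keyid=4
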
12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:59.385 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
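
The pass over ffdhe4096 begins below; the host/auth.sh@101 and @102 markers are the two loop headers driving this whole stretch of the log, which is why the identical five-key sequence repeats once per DH group. The shape of the sweep, with stubs standing in for the real function bodies so the structure is runnable on its own:

    # Every DH group is exercised against every key id with the sha384 digest.
    digest=sha384
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)
    keys=(k0 k1 k2 k3 k4)   # illustrative; the real entries are DHHC-1 strings

    nvmet_auth_set_key()   { echo "target: $1 $2 key$3"; }   # stub
    connect_authenticate() { echo "host:   $1 $2 key$3"; }   # stub

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
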
00:18:59.386 nvme0n1 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:59.386 12:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.386 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.643 nvme0n1 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.643 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.901 12:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.901 12:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.901 nvme0n1 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.901 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.159 12:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.417 nvme0n1 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.417 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.675 nvme0n1 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.675 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.676 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.950 nvme0n1 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.950 12:26:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.950 12:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.208 nvme0n1 00:19:01.208 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.208 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.208 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.208 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.208 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.208 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:01.466 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.467 12:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.467 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.725 nvme0n1 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.725 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.984 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.242 nvme0n1 00:19:02.242 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.242 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.242 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.242 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.242 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.242 12:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.242 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.242 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.242 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.243 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.810 nvme0n1 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:02.810 12:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.810 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.811 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.069 nvme0n1 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.069 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.327 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.327 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.327 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.328 12:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.946 nvme0n1 00:19:03.946 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.947 12:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.522 nvme0n1 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.522 12:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.522 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.781 12:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.781 12:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.348 nvme0n1 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:05.348 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.348 
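
The nvmet_auth_set_key frames traced above (host/auth.sh@42-51) program the kernel target with one key slot's DH-HMAC-CHAP parameters: the HMAC digest, the FFDHE group, the host secret, and, when the slot has one, the bidirectional controller secret. A minimal sketch of where those echoes land, assuming the standard Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under an allowed-host entry; the path and attribute names are inferred from the kernel interface, not quoted from this run:

    nvmet_auth_set_key() { # sketch of host/auth.sh@42-51; keys/ckeys arrays come from the harness
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key=${keys[keyid]} ckey=${ckeys[keyid]}

        # assumed configfs location for the allowed host (hostnqn as used in this run)
        local nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"    # e.g. hmac(sha384)
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"      # e.g. ffdhe8192
        echo "$key" > "$nvmet_host/dhchap_key"              # DHHC-1:xx:...: secret
        [[ -z $ckey ]] && return 0                          # slot 4 has no ctrl key
        echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }
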
12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.286 nvme0n1 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.286 12:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.854 nvme0n1 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:06.854 12:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.854 12:26:21 
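
The get_main_ns_ip helper traced at nvmf/common.sh@741-755 runs before every attach. It maps the transport in use to the name of the environment variable holding the address to dial, then dereferences that name with bash indirect expansion; for tcp in this run that resolves NVMF_INITIATOR_IP to 10.0.0.1. Reconstructed from the trace, with the guard return values assumed:

    get_main_ns_ip() { # reconstructed from the nvmf/common.sh@741-755 frames above
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # assumed guard behavior
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # a variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                           # indirect expansion must be non-empty
        echo "${!ip}"                                         # 10.0.0.1 for tcp in this run
    }
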
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.854 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.114 nvme0n1 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:07.114 12:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.114 nvme0n1 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:07.114 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.373 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.374 12:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.374 nvme0n1 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.374 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.632 nvme0n1 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:07.632 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.633 nvme0n1 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.633 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.891 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.892 nvme0n1 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.892 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.150 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.151 nvme0n1 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:08.151 
12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.151 12:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.151 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.409 nvme0n1 00:19:08.409 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.409 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.409 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.409 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.409 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.409 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.409 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.409 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.409 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.410 
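
All the secrets cycling through this section use the NVMe DH-HMAC-CHAP shared-secret representation. Reading one of the traced values, with the field meanings recalled from that secret format rather than shown by this log:

    # DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8:
    #   "DHHC-1"  fixed format tag
    #   "01"      transformation hash id: 01 SHA-256, 02 SHA-384, 03 SHA-512
    #             (the id also fixes the key length at 32/48/64 bytes), while
    #             00 means the secret is used as-is; hence slots 0-1 above
    #             carry DHHC-1:00:, slot 2 :01:, slot 3 :02:, slot 4 :03:
    #   trailer   base64 of the raw secret with a 4-byte CRC32 appended
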
12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.410 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.674 nvme0n1 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:08.674 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.675 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.676 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.944 nvme0n1 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.944 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 nvme0n1 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.203 
12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:09.203 12:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.203 12:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.462 nvme0n1 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:09.463 12:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.463 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.722 nvme0n1 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.722 12:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.722 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.981 nvme0n1 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:09.981 
12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.981 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
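The records above show the shape of one pass of the sweep: for each digest/dhgroup/keyid combination, host/auth.sh narrows the initiator's accepted DH-HMAC-CHAP parameters with bdev_nvme_set_options, resolves the target address via get_main_ns_ip, attaches a controller with the key under test, checks that exactly the expected controller appeared, and detaches it before the next combination. Outside the harness, one pass reduces to roughly the following sketch. It assumes a target already listening on 10.0.0.1:4420 with the matching secret provisioned, the DHCHAP secrets registered under the names key0..key4/ckey0..ckey3 earlier in the run (not shown in this excerpt), and SPDK's scripts/rpc.py reachable as rpc.py:

  # Pin the host to the digest/dhgroup pair under test.
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # Authenticated connect. keyid 4 has no controller key in this test,
  # so --dhchap-ctrlr-key is omitted (unidirectional authentication).
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
  # The attach only counts if the expected controller actually came up.
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # Tear down so the next combination starts from a clean slate.
  rpc.py bdev_nvme_detach_controller nvme0

The ckey expansion traced at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes the controller key optional: the array stays empty whenever no ckey is defined for the keyid, which is why the keyid 4 attach above carries only --dhchap-key. The verification and detach for the attach just issued continue in the records below.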
00:19:10.240 nvme0n1 00:19:10.240 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.240 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.240 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.240 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.240 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:10.240 12:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:19:10.240 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:10.241 12:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.241 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 nvme0n1 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:10.808 12:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.808 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.809 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:10.809 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.809 12:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:10.809 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:10.809 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:10.809 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.809 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.809 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.067 nvme0n1 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:11.067 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.068 12:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.634 nvme0n1 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.634 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.635 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.894 nvme0n1 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.894 12:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.461 nvme0n1 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2QxMjAyMjA5ZjU3NzYwNWQyM2JkNjNjODU5NWM2NTYlaX0I: 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: ]] 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjkwZDgyMzJhN2ZmMzA5MGM4Y2NkOTIxMWVjMmM2ZDQ2MjUxZTE2OTdhZGQyNDExYWYxOTBlNDg0ZWI3NGMxMbSYsM4=: 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.461 12:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.461 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.028 nvme0n1 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.028 12:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.028 12:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.593 nvme0n1 00:19:13.593 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.593 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.593 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.593 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.593 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.593 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODNkMjZkYmUwNjQwNTU2Nzk0MzEwYjYyNWM3ZWQ5M2IzhRl8: 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: ]] 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQyMTQ5NzYxZThhMTdiYjg5MGU5OGNkZjJkNDUyMTU8Y3He: 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.852 12:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.417 nvme0n1 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQxZTVkNTMzNjMyYzQyZmQwNTBhYzM1NWM5M2I4MWVlZWMwMjkwZThjZWE2NDk1sXqw4g==: 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: ]] 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjODhlYTNhZDAyYzU1YWMxOTMzMjQ1YzY3NmUzNznE7exk: 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:14.417 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.418 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.982 nvme0n1 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGU5MDMzZGM1N2FhN2U4YmNkNGFiMTE0YTEzMTUzYTgwYmZjZWE1MmUwNTlhMGZmZmUyMTIwMDQ2ZGU2ZmM1MH4lL+A=: 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.982 12:26:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.982 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.983 12:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.918 nvme0n1 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ1NTlhNTE5NTlmNjI4YTk3YzYyNWUzNzY3YTAwY2JhOTdlNThkZWJlMzhjMWRjKnu0zQ==: 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmI0MWU2Y2NiMjY5NWE5YmQ4ZmU3ZjRlZTk0ZmIzMTJlZjg2ODA5Mzc1YmE1OWFmULmXwA==: 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.918 2024/07/25 12:26:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:15.918 request: 00:19:15.918 { 00:19:15.918 "method": "bdev_nvme_attach_controller", 00:19:15.918 "params": { 00:19:15.918 "name": "nvme0", 00:19:15.918 "trtype": "tcp", 00:19:15.918 "traddr": "10.0.0.1", 00:19:15.918 "adrfam": "ipv4", 00:19:15.918 "trsvcid": "4420", 00:19:15.918 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:15.918 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:15.918 "prchk_reftag": false, 00:19:15.918 "prchk_guard": false, 00:19:15.918 "hdgst": false, 00:19:15.918 "ddgst": false 00:19:15.918 } 00:19:15.918 } 00:19:15.918 Got JSON-RPC error response 00:19:15.918 GoRPCClient: error on JSON-RPC call 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.918 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- 
# local ip 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.919 2024/07/25 12:26:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:15.919 request: 00:19:15.919 { 00:19:15.919 "method": "bdev_nvme_attach_controller", 00:19:15.919 "params": { 00:19:15.919 "name": "nvme0", 00:19:15.919 "trtype": "tcp", 00:19:15.919 "traddr": "10.0.0.1", 00:19:15.919 "adrfam": "ipv4", 00:19:15.919 "trsvcid": "4420", 00:19:15.919 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:15.919 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:15.919 "prchk_reftag": false, 00:19:15.919 "prchk_guard": false, 00:19:15.919 "hdgst": false, 00:19:15.919 "ddgst": false, 00:19:15.919 "dhchap_key": "key2" 00:19:15.919 } 00:19:15.919 } 00:19:15.919 Got 
JSON-RPC error response 00:19:15.919 GoRPCClient: error on JSON-RPC call 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.919 2024/07/25 12:26:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:15.919 request: 00:19:15.919 { 00:19:15.919 "method": "bdev_nvme_attach_controller", 00:19:15.919 "params": { 00:19:15.919 "name": "nvme0", 00:19:15.919 "trtype": "tcp", 00:19:15.919 "traddr": "10.0.0.1", 00:19:15.919 "adrfam": "ipv4", 00:19:15.919 "trsvcid": "4420", 00:19:15.919 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:15.919 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:15.919 "prchk_reftag": false, 00:19:15.919 "prchk_guard": false, 00:19:15.919 "hdgst": false, 00:19:15.919 "ddgst": false, 00:19:15.919 "dhchap_key": "key1", 00:19:15.919 "dhchap_ctrlr_key": "ckey2" 00:19:15.919 } 00:19:15.919 } 00:19:15.919 Got JSON-RPC error response 00:19:15.919 GoRPCClient: error on JSON-RPC call 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:15.919 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:15.919 rmmod nvme_tcp 00:19:15.919 rmmod nvme_fabrics 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 90954 ']' 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # 
killprocess 90954 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 90954 ']' 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 90954 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90954 00:19:16.178 killing process with pid 90954 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90954' 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 90954 00:19:16.178 12:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 90954 00:19:16.178 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.178 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.178 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.178 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.178 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.178 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.178 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.178 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:16.435 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:17.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:17.258 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:17.258 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:17.258 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.17z /tmp/spdk.key-null.clv /tmp/spdk.key-sha256.CHN /tmp/spdk.key-sha384.ROY /tmp/spdk.key-sha512.sP4 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:17.258 12:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:17.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:17.516 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:17.516 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:17.774 ************************************ 00:19:17.774 END TEST nvmf_auth_host 00:19:17.774 ************************************ 00:19:17.774 00:19:17.774 real 0m36.469s 00:19:17.774 user 0m32.510s 00:19:17.774 sys 0m3.809s 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.774 ************************************ 00:19:17.774 START TEST nvmf_digest 00:19:17.774 ************************************ 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:17.774 * Looking for test storage... 
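A note on the teardown traced a few lines above, before the digest test proceeds: the auth test's cleanup unwinds the in-kernel nvmet target through configfs, removing child entries before parents, then unloads the target modules and deletes the generated DH-HMAC-CHAP key files. The sketch below reconstructs that sequence from the commands visible in the trace; it is an illustration, not the verbatim body of the cleanup functions. It assumes the standard configfs layout under /sys/kernel/config/nvmet and the NQN, port, and namespace ids shown in this run, and the redirection target of the logged `echo 0` is an assumption (xtrace does not print redirections; disabling the namespace before removal is the conventional step here).

  #!/usr/bin/env bash
  # Hedged reconstruction of the nvmet teardown logged above (run as root).
  cfs=/sys/kernel/config/nvmet
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0

  rm -f "$cfs/subsystems/$subnqn/allowed_hosts/$hostnqn"  # drop the host ACL symlink
  rmdir "$cfs/hosts/$hostnqn"                             # remove the host entry itself
  echo 0 > "$cfs/subsystems/$subnqn/namespaces/1/enable"  # assumed target of the logged 'echo 0': disable ns 1
  rm -f "$cfs/ports/1/subsystems/$subnqn"                 # unlink the subsystem from port 1
  rmdir "$cfs/subsystems/$subnqn/namespaces/1"            # children before parents
  rmdir "$cfs/ports/1"
  rmdir "$cfs/subsystems/$subnqn"
  modprobe -r nvmet_tcp nvmet                             # unload the kernel target modules
  rm -f /tmp/spdk.key-*                                   # the run removes its specific key files; a glob is used here for brevity

With the kernel target and its DHCHAP keys gone, the host is back to a clean state, which is why the digest test that starts next can rebuild its own topology from scratch.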
00:19:17.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.774 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.775 
12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:17.775 Cannot find device "nvmf_tgt_br" 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.775 Cannot find device "nvmf_tgt_br2" 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:17.775 
Cannot find device "nvmf_tgt_br" 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:17.775 Cannot find device "nvmf_tgt_br2" 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:17.775 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:18.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:18.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set 
nvmf_init_br master nvmf_br 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:18.033 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:18.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:18.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:19:18.292 00:19:18.292 --- 10.0.0.2 ping statistics --- 00:19:18.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.292 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:18.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:18.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:18.292 00:19:18.292 --- 10.0.0.3 ping statistics --- 00:19:18.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.292 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:18.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:18.292 00:19:18.292 --- 10.0.0.1 ping statistics --- 00:19:18.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.292 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:18.292 ************************************ 00:19:18.292 START TEST nvmf_digest_clean 00:19:18.292 
************************************ 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:18.292 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=92562 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 92562 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92562 ']' 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.293 12:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:18.293 [2024-07-25 12:26:33.009698] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:19:18.293 [2024-07-25 12:26:33.009813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.551 [2024-07-25 12:26:33.156993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.551 [2024-07-25 12:26:33.290323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.551 [2024-07-25 12:26:33.290403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:18.551 [2024-07-25 12:26:33.290416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.551 [2024-07-25 12:26:33.290425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.551 [2024-07-25 12:26:33.290432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.551 [2024-07-25 12:26:33.290459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.485 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.485 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:19.485 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.485 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.485 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:19.485 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:19.486 null0 00:19:19.486 [2024-07-25 12:26:34.206119] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.486 [2024-07-25 12:26:34.230267] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
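
For orientation before the bperf runs start: the nvmf_veth_init trace earlier in this log reduces to the sketch below (interface, bridge, and namespace names exactly as in nvmf/common.sh; the second target pair nvmf_tgt_if2/nvmf_tgt_br2 and the link-up steps are elided, and nvmf_tgt itself runs inside the namespace via "ip netns exec nvmf_tgt_ns_spdk"):

    ip netns add nvmf_tgt_ns_spdk                                    # isolate the target's network stack
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
    ip link add nvmf_br type bridge                                  # bridge joins the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic

The three pings earlier are the sanity check that 10.0.0.1 can reach 10.0.0.2/10.0.0.3 across the bridge (and back through the namespace) before the target is brought up.
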
00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92612 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92612 /var/tmp/bperf.sock 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92612 ']' 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.486 12:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:19.486 [2024-07-25 12:26:34.296985] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:19:19.486 [2024-07-25 12:26:34.297382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92612 ] 00:19:19.744 [2024-07-25 12:26:34.440125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.744 [2024-07-25 12:26:34.576247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.694 12:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.694 12:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:20.694 12:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:20.694 12:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:20.694 12:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:20.953 12:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:20.953 12:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:21.213 nvme0n1 00:19:21.213 12:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:21.213 12:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:21.471 Running I/O for 2 seconds... 
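
Every run_bperf iteration in this suite drives bdevperf the same way; a condensed sketch with the arguments of the first run (paths relative to the spdk repo, socket and NQN as traced above):

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &          # start bdevperf paused, RPC on bperf.sock
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init       # finish subsystem init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                       # --ddgst enables the NVMe/TCP data digest (CRC32C)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests  # run the timed workload

The --wait-for-rpc pause looks redundant in this run (scan_dsa=false, so nothing is reconfigured before framework_start_init), but it is what gives the dsa_initiator variants of this test a window to swap in a different crc32c module first.
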
00:19:23.373 00:19:23.373 Latency(us) 00:19:23.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.373 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:23.373 nvme0n1 : 2.00 19441.85 75.94 0.00 0.00 6576.76 3291.69 15966.95 00:19:23.373 =================================================================================================================== 00:19:23.373 Total : 19441.85 75.94 0.00 0.00 6576.76 3291.69 15966.95 00:19:23.373 0 00:19:23.373 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:23.373 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:23.373 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:23.373 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:23.373 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:23.373 | select(.opcode=="crc32c") 00:19:23.373 | "\(.module_name) \(.executed)"' 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92612 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92612 ']' 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92612 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92612 00:19:23.632 killing process with pid 92612 00:19:23.632 Received shutdown signal, test time was about 2.000000 seconds 00:19:23.632 00:19:23.632 Latency(us) 00:19:23.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.632 =================================================================================================================== 00:19:23.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92612' 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92612 00:19:23.632 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92612 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92702 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92702 /var/tmp/bperf.sock 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92702 ']' 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:24.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.200 12:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:24.200 [2024-07-25 12:26:38.820237] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:19:24.200 [2024-07-25 12:26:38.820619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92702 ] 00:19:24.200 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:24.200 Zero copy mechanism will not be used.
00:19:24.200 [2024-07-25 12:26:38.958515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.459 [2024-07-25 12:26:39.110453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.027 12:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.027 12:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:25.027 12:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:25.027 12:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:25.027 12:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:25.593 12:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:25.593 12:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:25.852 nvme0n1 00:19:25.852 12:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:25.852 12:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:25.852 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:25.852 Zero copy mechanism will not be used. 00:19:25.852 Running I/O for 2 seconds...
00:19:27.756 00:19:27.756 Latency(us) 00:19:27.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.756 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:27.756 nvme0n1 : 2.00 6487.35 810.92 0.00 0.00 2461.89 729.83 9711.24 00:19:27.756 =================================================================================================================== 00:19:27.756 Total : 6487.35 810.92 0.00 0.00 2461.89 729.83 9711.24 00:19:27.756 0 00:19:27.756 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:27.756 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:27.756 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:27.756 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:27.756 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:27.756 | select(.opcode=="crc32c") 00:19:27.756 | "\(.module_name) \(.executed)"' 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92702 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92702 ']' 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92702 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92702 00:19:28.323 killing process with pid 92702 00:19:28.323 Received shutdown signal, test time was about 2.000000 seconds 00:19:28.323 00:19:28.323 Latency(us) 00:19:28.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.323 =================================================================================================================== 00:19:28.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92702' 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92702 00:19:28.323 12:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 92702 
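
Two checks close out each of these runs. First, the accel statistics are pulled over RPC and filtered for the crc32c opcode; digest.sh requires the executing module to match exp_module, which is the software fallback here because scan_dsa=false:

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # passes when this prints "software <nonzero count>"

Second, the latency table is internally consistent: for the 128 KiB randread run above, 6487.35 IOPS x 131072 bytes = 810.92 MiB/s, which is exactly the reported MiB/s column.
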
00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92794 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92794 /var/tmp/bperf.sock 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92794 ']' 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.582 12:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:28.582 [2024-07-25 12:26:43.326696] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:19:28.582 [2024-07-25 12:26:43.326811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92794 ] 00:19:28.840 [2024-07-25 12:26:43.465820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.840 [2024-07-25 12:26:43.611301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.406 12:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.406 12:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:29.406 12:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:29.406 12:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:29.407 12:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:30.025 12:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:30.025 12:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:30.284 nvme0n1 00:19:30.284 12:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:30.284 12:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:30.284 Running I/O for 2 seconds... 
00:19:32.812 00:19:32.812 Latency(us) 00:19:32.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.812 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:32.812 nvme0n1 : 2.00 21195.29 82.79 0.00 0.00 6032.25 2427.81 13166.78 00:19:32.812 =================================================================================================================== 00:19:32.812 Total : 21195.29 82.79 0.00 0.00 6032.25 2427.81 13166.78 00:19:32.812 0 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:32.812 | select(.opcode=="crc32c") 00:19:32.812 | "\(.module_name) \(.executed)"' 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92794 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92794 ']' 00:19:32.812 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92794 00:19:32.813 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:32.813 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.813 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92794 00:19:32.813 killing process with pid 92794 00:19:32.813 Received shutdown signal, test time was about 2.000000 seconds 00:19:32.813 00:19:32.813 Latency(us) 00:19:32.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.813 =================================================================================================================== 00:19:32.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.813 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:32.813 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:32.813 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92794' 00:19:32.813 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92794 00:19:32.813 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92794 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92884 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92884 /var/tmp/bperf.sock 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92884 ']' 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:33.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.071 12:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:33.071 [2024-07-25 12:26:47.847149] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:19:33.071 [2024-07-25 12:26:47.847606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92884 ] 00:19:33.071 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:33.071 Zero copy mechanism will not be used. 
00:19:33.329 [2024-07-25 12:26:47.995174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.329 [2024-07-25 12:26:48.146039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.284 12:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.284 12:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:19:34.284 12:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:34.284 12:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:34.284 12:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:34.541 12:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:34.541 12:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:34.800 nvme0n1 00:19:34.800 12:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:34.800 12:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:35.058 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:35.058 Zero copy mechanism will not be used. 00:19:35.058 Running I/O for 2 seconds... 
00:19:36.958 00:19:36.958 Latency(us) 00:19:36.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.958 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:36.958 nvme0n1 : 2.00 6082.93 760.37 0.00 0.00 2624.17 2174.60 8638.84 00:19:36.958 =================================================================================================================== 00:19:36.958 Total : 6082.93 760.37 0.00 0.00 2624.17 2174.60 8638.84 00:19:36.958 0 00:19:36.958 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:36.958 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:36.958 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:36.958 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:36.958 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:36.958 | select(.opcode=="crc32c") 00:19:36.958 | "\(.module_name) \(.executed)"' 00:19:37.217 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:37.217 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:37.217 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:37.217 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:37.217 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92884 00:19:37.217 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92884 ']' 00:19:37.217 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92884 00:19:37.217 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:37.217 12:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.217 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92884 00:19:37.217 killing process with pid 92884 00:19:37.217 Received shutdown signal, test time was about 2.000000 seconds 00:19:37.217 00:19:37.217 Latency(us) 00:19:37.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.217 =================================================================================================================== 00:19:37.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.217 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:37.217 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:37.217 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92884' 00:19:37.217 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92884 00:19:37.217 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92884 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92562 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92562 ']' 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92562 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92562 00:19:37.784 killing process with pid 92562 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92562' 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92562 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 92562 00:19:37.784 00:19:37.784 real 0m19.708s 00:19:37.784 user 0m36.553s 00:19:37.784 sys 0m5.624s 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:37.784 ************************************ 00:19:37.784 END TEST nvmf_digest_clean 00:19:37.784 ************************************ 00:19:37.784 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:38.043 ************************************ 00:19:38.043 START TEST nvmf_digest_error 00:19:38.043 ************************************ 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93003 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93003 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93003 ']' 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.043 12:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:38.043 [2024-07-25 12:26:52.771563] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:19:38.043 [2024-07-25 12:26:52.771699] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.301 [2024-07-25 12:26:52.915387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.301 [2024-07-25 12:26:53.043041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.301 [2024-07-25 12:26:53.043102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.301 [2024-07-25 12:26:53.043115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.301 [2024-07-25 12:26:53.043124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.301 [2024-07-25 12:26:53.043132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:38.301 [2024-07-25 12:26:53.043167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.235 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.235 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:39.236 [2024-07-25 12:26:53.823895] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:39.236 null0 00:19:39.236 [2024-07-25 12:26:53.947743] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.236 [2024-07-25 12:26:53.971914] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93047 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93047 /var/tmp/bperf.sock 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93047 ']' 00:19:39.236 12:26:53 
00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:39.236 12:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:39.236 [2024-07-25 12:26:54.027591] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization...
00:19:39.236 [2024-07-25 12:26:54.027697] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93047 ]
00:19:39.495 [2024-07-25 12:26:54.162065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:39.495 [2024-07-25 12:26:54.317824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:40.430 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:40.430 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:19:40.430 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:40.430 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:40.688 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:40.688 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:40.688 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:40.688 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:40.688 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:40.688 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:40.947 nvme0n1
00:19:40.947 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:19:40.947 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:40.947 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
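With both apps up, the error path is wired from two sides: bperf_rpc talks to the bdevperf instance on /var/tmp/bperf.sock, while rpc_cmd keeps talking to the target on /var/tmp/spdk.sock, where the error accel module now owns crc32c. Injection is disabled during the attach so the controller connects cleanly, then re-armed once nvme0n1 exists. In sketch form (every command as issued above; only the variable names are mine):

    SPDK=/home/vagrant/spdk_repo/spdk
    TGT_RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"     # nvmf target
    BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"  # bdevperf host
    # Keep per-error statistics and retry failed I/O indefinitely at the bdev layer.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Injection off while connecting, so the attach itself succeeds.
    $TGT_RPC accel_error_inject_error -o crc32c -t disable
    # --ddgst enables the NVMe/TCP data digest, the field being corrupted.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm: corrupt crc32c results on the target (-i 256 as passed by the test).
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256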
00:19:40.947 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:40.947 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:40.947 12:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Running I/O for 2 seconds...
00:19:41.206 [2024-07-25 12:26:55.827819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30)
00:19:41.206 [2024-07-25 12:26:55.828579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:41.206 [2024-07-25 12:26:55.828750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:41.206 [2024-07-25 12:26:55.843679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30)
00:19:41.206 [2024-07-25 12:26:55.843835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:41.206 [2024-07-25 12:26:55.843958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:41.206 [2024-07-25 12:26:55.858951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30)
00:19:41.206 [2024-07-25 12:26:55.859086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:41.206 [2024-07-25 12:26:55.859109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:41.206 [2024-07-25 12:26:55.874424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30)
00:19:41.206 [2024-07-25 12:26:55.874504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:41.206 [2024-07-25 12:26:55.874519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:41.206 [2024-07-25 12:26:55.888712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30)
00:19:41.206 [2024-07-25 12:26:55.888755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:41.206 [2024-07-25 12:26:55.888769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:41.206 [2024-07-25 12:26:55.903515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30)
00:19:41.206 [2024-07-25 12:26:55.903604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:41.206 [2024-07-25 12:26:55.903631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
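The triplets above repeat, with varying cid/lba, for the rest of the 2-second window: for each affected READ, nvme_tcp.c:1459 reports the data digest (ddgst) mismatch on the receive path, nvme_qpair.c prints the failed command, and the completion is COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 with status code 0x22, which the bdev layer keeps retrying under --bdev-retry-count -1. A quick way to count how many digest errors the host detected in a captured console log (file name illustrative):

    # One line per corrupted-digest detection on the initiator side.
    grep -c 'data digest error on tqpair' nvmf-tcp-vg-autotest.log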
00:19:41.206 [2024-07-25 12:26:55.916545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.206 [2024-07-25 12:26:55.916597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.206 [2024-07-25 12:26:55.916619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.206 [2024-07-25 12:26:55.929338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.206 [2024-07-25 12:26:55.929392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.206 [2024-07-25 12:26:55.929406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.206 [2024-07-25 12:26:55.945287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.206 [2024-07-25 12:26:55.945337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.206 [2024-07-25 12:26:55.945352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.207 [2024-07-25 12:26:55.959762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.207 [2024-07-25 12:26:55.959829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.207 [2024-07-25 12:26:55.959851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.207 [2024-07-25 12:26:55.972951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.207 [2024-07-25 12:26:55.973002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.207 [2024-07-25 12:26:55.973024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.207 [2024-07-25 12:26:55.985518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.207 [2024-07-25 12:26:55.985574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.207 [2024-07-25 12:26:55.985600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.207 [2024-07-25 12:26:55.999918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.207 [2024-07-25 12:26:55.999967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.207 [2024-07-25 12:26:55.999989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.207 [2024-07-25 12:26:56.014521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.207 [2024-07-25 12:26:56.014598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.207 [2024-07-25 12:26:56.014612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.207 [2024-07-25 12:26:56.031082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.207 [2024-07-25 12:26:56.031135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.207 [2024-07-25 12:26:56.031156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.207 [2024-07-25 12:26:56.047719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.207 [2024-07-25 12:26:56.047759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.207 [2024-07-25 12:26:56.047790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.207 [2024-07-25 12:26:56.059659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.207 [2024-07-25 12:26:56.059725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.207 [2024-07-25 12:26:56.059738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.075259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.075320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.075343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.090345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.090395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.090409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.107110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.107148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.107162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.119448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.119543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.119573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.134328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.134382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.134396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.150257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.150311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.150327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.166406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.166470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.166499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.181470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.181510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.181525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.195805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.195853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.195866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.212611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.212652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:41.466 [2024-07-25 12:26:56.212680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.228603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.228654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.228669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.242408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.242473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.242492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.258858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.258908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.258922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.273683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.273732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.273757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.287820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.287866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.287890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.302374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.302431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.302453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.466 [2024-07-25 12:26:56.316714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.466 [2024-07-25 12:26:56.316755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:20732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-07-25 12:26:56.316770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.731 [2024-07-25 12:26:56.331732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.731 [2024-07-25 12:26:56.331782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.731 [2024-07-25 12:26:56.331796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.731 [2024-07-25 12:26:56.346152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.731 [2024-07-25 12:26:56.346192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.731 [2024-07-25 12:26:56.346206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.731 [2024-07-25 12:26:56.361855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.731 [2024-07-25 12:26:56.361896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.731 [2024-07-25 12:26:56.361911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.731 [2024-07-25 12:26:56.376821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.731 [2024-07-25 12:26:56.376861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.731 [2024-07-25 12:26:56.376875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.731 [2024-07-25 12:26:56.391868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.731 [2024-07-25 12:26:56.391907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.391929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.406812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.406867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.406885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.421006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.421046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.421067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.435674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.435724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.435739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.451983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.452050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.452069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.468535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.468574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.468587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.483053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.483095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.483110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.498742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.498795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.498821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.512985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.513056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.513071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.527223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 
00:19:41.732 [2024-07-25 12:26:56.527277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.527332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.542461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.542515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.542529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.558397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.558475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.558489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.574130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.574170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.574184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:41.732 [2024-07-25 12:26:56.586387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:41.732 [2024-07-25 12:26:56.586453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.732 [2024-07-25 12:26:56.586474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.010 [2024-07-25 12:26:56.600563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.010 [2024-07-25 12:26:56.600612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.010 [2024-07-25 12:26:56.600626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.010 [2024-07-25 12:26:56.613073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.010 [2024-07-25 12:26:56.613112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.010 [2024-07-25 12:26:56.613126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.010 [2024-07-25 12:26:56.628051] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.628104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.628128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.641880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.641933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.641955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.655791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.655845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.655865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.670493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.670544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.670565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.685444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.685495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.685510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.698124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.698173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.698201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.713534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.713580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.713604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.727286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.727353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.727367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.742188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.742250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.742273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.755119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.755184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.755198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.770528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.770563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.770577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.787047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.787097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.787112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.801749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.801803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.801842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.816094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.816160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.816182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.829957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.830005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.830028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.841630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.841679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.841697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.858524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.858603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.858617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.011 [2024-07-25 12:26:56.873045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.011 [2024-07-25 12:26:56.873094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.011 [2024-07-25 12:26:56.873123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.270 [2024-07-25 12:26:56.893267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.270 [2024-07-25 12:26:56.893438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.270 [2024-07-25 12:26:56.893473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.270 [2024-07-25 12:26:56.908733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.270 [2024-07-25 12:26:56.908802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.270 [2024-07-25 12:26:56.908821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.270 [2024-07-25 12:26:56.925093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.270 [2024-07-25 12:26:56.925178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.270 [2024-07-25 12:26:56.925206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.270 [2024-07-25 12:26:56.939567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.270 [2024-07-25 12:26:56.939629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.270 [2024-07-25 12:26:56.939646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.270 [2024-07-25 12:26:56.956081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.270 [2024-07-25 12:26:56.956142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.270 [2024-07-25 12:26:56.956170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.270 [2024-07-25 12:26:56.971025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.270 [2024-07-25 12:26:56.971091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.270 [2024-07-25 12:26:56.971109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.270 [2024-07-25 12:26:56.986845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:56.986907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:56.986937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.001874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.001937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:57.001964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.016959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.017014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:57.017032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.031987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.032051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:42.271 [2024-07-25 12:26:57.032072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.047166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.047230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:57.047253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.061801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.061883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:57.061913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.075418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.075475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:57.075490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.087863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.087958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:57.087972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.100495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.100550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:57.100579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.117451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.117506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:57.117535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.271 [2024-07-25 12:26:57.131850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.271 [2024-07-25 12:26:57.131924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8750 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.271 [2024-07-25 12:26:57.131944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.530 [2024-07-25 12:26:57.145001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.530 [2024-07-25 12:26:57.145065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.530 [2024-07-25 12:26:57.145079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.530 [2024-07-25 12:26:57.160416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.530 [2024-07-25 12:26:57.160472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.530 [2024-07-25 12:26:57.160502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.530 [2024-07-25 12:26:57.171712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.530 [2024-07-25 12:26:57.171799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.530 [2024-07-25 12:26:57.171829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.530 [2024-07-25 12:26:57.186393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.530 [2024-07-25 12:26:57.186453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.530 [2024-07-25 12:26:57.186483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.530 [2024-07-25 12:26:57.201470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.530 [2024-07-25 12:26:57.201516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.530 [2024-07-25 12:26:57.201530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.530 [2024-07-25 12:26:57.214842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.530 [2024-07-25 12:26:57.214884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.530 [2024-07-25 12:26:57.214915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.530 [2024-07-25 12:26:57.228757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30) 00:19:42.530 [2024-07-25 12:26:57.228802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:42.530 [2024-07-25 12:26:57.228817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated entries elided (12:26:57.244102 through 12:26:57.798491): every few milliseconds the same three-record pattern recurs — a data digest error on tqpair=(0x14c4e30) from nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done, the failing single-block READ (len:1, varying cid and lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:19:43.049 [2024-07-25 12:26:57.810618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c4e30)
00:19:43.049 [2024-07-25 12:26:57.810676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:43.049 [2024-07-25 12:26:57.810706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
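Every elided repetition above is one injected fault observed end to end: nvme_tcp.c:1459 flags the mismatched data digest, and the READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22, dnr:0) — the status that the error counters queried just below aggregate. As a quick offline cross-check, the completions in a saved copy of this console output can be tallied with grep; the filename here is a placeholder, not something the harness writes:

  # Count digest-triggered completions in a captured copy of this log.
  # bdevperf-digest.log is a hypothetical capture of the output above.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bdevperf-digest.log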
00:19:43.049
00:19:43.049 Latency(us)
00:19:43.049 Device Information                                            : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:19:43.049 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:19:43.049 nvme0n1                                                       :       2.01   17449.43      68.16      0.00     0.00    7325.52    3783.21   22758.87
00:19:43.050 ===================================================================================================================
00:19:43.050 Total                                                         :              17449.43      68.16      0.00     0.00    7325.52    3783.21   22758.87
00:19:43.050 0
00:19:43.050 12:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:19:43.308 12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 ))
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93047
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93047 ']'
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93047
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93047
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
killing process with pid 93047
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93047'
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93047
Received shutdown signal, test time was about 2.000000 seconds
00:19:43.308
00:19:43.308 Latency(us)
00:19:43.308 Device Information                                            : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:19:43.308 ===================================================================================================================
00:19:43.308 Total                                                         :                  0.00       0.00      0.00     0.00       0.00       0.00       0.00
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93047
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
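The pass/fail decision above hinges on get_transient_errcount: per the traced digest.sh lines 27-28, it is an iostat RPC against the bdevperf socket piped through jq, and the `(( 137 > 0 ))` check confirms 137 transient transport errors were counted. A self-contained sketch of that flow — the wrapper function is reconstructed from the trace, not copied from digest.sh — looks roughly like this:

  #!/usr/bin/env bash
  # Sketch of the traced error-count query; paths match the log.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  get_transient_errcount() {
      # bdev_get_iostat returns JSON; with --nvme-error-stat set on the
      # bdev_nvme module, per-status-code NVMe error counters appear under
      # .driver_specific.nvme_error.status_code.
      "$rpc" -s "$sock" bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  # The test passes only if at least one injected digest error was counted.
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"

The bdev_nvme_set_options --nvme-error-stat call visible in the relaunch below is what makes these counters available in the first place.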
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93143
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93143 /var/tmp/bperf.sock
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93143 ']'
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
12:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:43.824 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:43.824 Zero copy mechanism will not be used.
00:19:43.824 [2024-07-25 12:26:58.449733] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization...
00:19:43.824 [2024-07-25 12:26:58.449861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93143 ]
00:19:43.824 [2024-07-25 12:26:58.581854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:44.082 [2024-07-25 12:26:58.737827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:44.648 12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:44.906 12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:44.906 12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:45.196 nvme0n1
00:19:45.196 12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:45.196 12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
12:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:45.456 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:45.456 Zero copy mechanism will not be used.
00:19:45.456 Running I/O for 2 seconds...
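Before the 2-second window above opens, the trace shows the whole arm-and-fire sequence for this 131072-byte randread pass. Condensed into plain shell, and assuming the bare rpc_cmd calls (which carry no -s flag in the trace) go to the harness's default RPC socket at /var/tmp/spdk.sock, it is roughly:

  #!/usr/bin/env bash
  # Sketch of one run_bperf_err pass, condensed from the traced digest.sh
  # steps. Paths and RPC names are taken from the trace; /var/tmp/spdk.sock
  # as the rpc_cmd destination is an assumption.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  bperf_sock=/var/tmp/bperf.sock
  tgt_sock=/var/tmp/spdk.sock   # assumed target of the un-suffixed rpc_cmd calls

  # Publish NVMe error counters so every digest failure is recorded as a
  # counted TRANSIENT TRANSPORT ERROR completion.
  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach with the TCP data digest (--ddgst) enabled; crc32c error
  # injection stays off while the controller connects.
  "$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t disable
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 32 crc32c operations, then drive the queued workload
  # (randread, 128 KiB I/O, qd 16) for the 2-second bdevperf window.
  "$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t corrupt -i 32
  "$bperf_py" -s "$bperf_sock" perform_tests

The digest errors that follow are the expected result of that corrupt injection: each 128 KiB read (len:32 blocks of 4096 bytes) fails its data-digest check on the new qpair.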
00:19:45.456 [2024-07-25 12:27:00.099830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0)
00:19:45.456 [2024-07-25 12:27:00.099887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:45.456 [2024-07-25 12:27:00.099918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... repeated entries elided (12:27:00.104835 through 12:27:00.448804): the same three-record pattern recurs — a data digest error on tqpair=(0x1031fd0) from nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done, the failing 128 KiB READ (len:32, varying cid and lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:19:45.718 [2024-07-25 12:27:00.451507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0)
00:19:45.718 [2024-07-25 12:27:00.451547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:45.718 [2024-07-25 12:27:00.451561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.456477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.456520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.456535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.459974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.460014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.460028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.464571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.464628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.464658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.469434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.469494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.469508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.472968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.473011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.473026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.477983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.478055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.478086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.482939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.482997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.483027] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.486101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.486154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.486183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.490583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.490640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.490669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.494942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.494986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.495016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.498399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.498454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.498484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.503061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.503118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.503147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.508037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.508093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.508122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.512943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.512987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.513002] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.517391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.517444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.517474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.520605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.520674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.520733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.525251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.525332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.525347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.530255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.530336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.530351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.533845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.533899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.533928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.537944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.537987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.538016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.542504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.542563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:45.718 [2024-07-25 12:27:00.542578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.545907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.545966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.545996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.551265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.551320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.551335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.556034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.556090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.556120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.560077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.560116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.560145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.564279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.564346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.564361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.568770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.568812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.568827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.573445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.573500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.573514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.718 [2024-07-25 12:27:00.578491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.718 [2024-07-25 12:27:00.578536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.718 [2024-07-25 12:27:00.578550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.977 [2024-07-25 12:27:00.581350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.977 [2024-07-25 12:27:00.581421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.977 [2024-07-25 12:27:00.581451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.977 [2024-07-25 12:27:00.586694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.977 [2024-07-25 12:27:00.586769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.977 [2024-07-25 12:27:00.586784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.977 [2024-07-25 12:27:00.590149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.977 [2024-07-25 12:27:00.590193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.977 [2024-07-25 12:27:00.590222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.594810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.594883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.594912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.599596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.599667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.599697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.603081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.603123] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.603137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.607420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.607460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.607491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.612904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.612947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.612961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.616626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.616682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.616707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.620984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.621028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.621042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.625957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.626015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.626044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.630365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.630430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.630460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.634435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.634475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.634504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.638427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.638479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.638494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.642257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.642312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.642327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.646391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.646442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.646471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.650868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.650924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.650954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.655043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.655097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.655127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.658834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.658908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.658923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.663141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.663210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.663239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.668027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.668071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.668085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.671961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.672002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.672017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.676257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.676325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.676341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.680343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.680409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.680438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.684210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.684250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.684280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.687700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.687778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.687807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.692426] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.692485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.692499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.695907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.695965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.695995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.700174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.700232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.700262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.703946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.704020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.704034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.707594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.707636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.707651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.712432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.712476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.712505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.717234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.717321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.717338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:19:45.978 [2024-07-25 12:27:00.720498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.720554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.720567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.978 [2024-07-25 12:27:00.725028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.978 [2024-07-25 12:27:00.725118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.978 [2024-07-25 12:27:00.725148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.729780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.729869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.729913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.733422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.733496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.733526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.738343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.738399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.738428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.743070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.743127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.743157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.746495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.746552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.746583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.750506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.750561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.750590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.755330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.755397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.755428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.759515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.759557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.759572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.762955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.763009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.763038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.767669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.767742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.767771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.772087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.772143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.772173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.776131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.776186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.776215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.780631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.780712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.780728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.784796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.784837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.784862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.788561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.788615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.788644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.792280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.792360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.792391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.796307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.796382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.796414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.800774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.800817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.800832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.804649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.804745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:45.979 [2024-07-25 12:27:00.804761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.809154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.809212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.809227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.814462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.814507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.814537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.818816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.818873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.818903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.822058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.822118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.822132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.826889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.826962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.826992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.830588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.830663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.979 [2024-07-25 12:27:00.830693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.979 [2024-07-25 12:27:00.834662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:45.979 [2024-07-25 12:27:00.834723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:45.979 [2024-07-25 12:27:00.834737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:45.979 [2024-07-25 12:27:00.838977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0)
00:19:45.979 [2024-07-25 12:27:00.839020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:45.980 [2024-07-25 12:27:00.839050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... repeated entries between 12:27:00.842 and 12:27:01.448 elided: each is the same nvme_tcp.c:1459 *ERROR* "data digest error on tqpair=(0x1031fd0)" followed by a READ command notice (varying cid/lba, len:32) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 ...]
00:19:46.765 [2024-07-25 12:27:01.452106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0)
00:19:46.765 [2024-07-25 12:27:01.452150] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.452165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.456597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.456641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.456655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.460564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.460607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.460621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.464412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.464455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.464470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.469178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.469250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.469264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.473013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.473057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.473071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.476477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.476516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.476546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.480803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 
[2024-07-25 12:27:01.480845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.480860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.485005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.485047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.485061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.489440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.489481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.489512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.494372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.494415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.494429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.498325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.498378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.498393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.502794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.502844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.502875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.507832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.507892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.507918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.511220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.511261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.511275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.515585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.515628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.515643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.520088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.520133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.520147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.523681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.523724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.523739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.528246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.528289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.528317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.532349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.532409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.532423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.536866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.536911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.536926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.540905] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.540950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.540964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.544623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.544664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.765 [2024-07-25 12:27:01.544678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.765 [2024-07-25 12:27:01.548653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.765 [2024-07-25 12:27:01.548702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.548718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.552599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.552656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.552696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.556905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.556949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.556963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.561086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.561126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.561140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.564640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.564679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.564731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:19:46.766 [2024-07-25 12:27:01.568957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.569001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.569016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.573494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.573536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.573565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.578182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.578226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.578240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.582264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.582350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.582381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.587031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.587074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.587088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.590753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.590794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.590824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.595075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.595116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.595146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.599252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.599330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.599346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.602817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.602859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.602889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.607421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.607464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.607479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.611796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.611851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.611881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.615479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.615527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.615541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.619837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.619879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.619908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.623832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.623875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.623905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:46.766 [2024-07-25 12:27:01.627494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:46.766 [2024-07-25 12:27:01.627536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.766 [2024-07-25 12:27:01.627551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.631851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.631895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.631909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.635993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.636051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.636065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.640093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.640134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.640165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.644300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.644354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.644369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.649452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.649495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.649509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.653069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.653109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 
[2024-07-25 12:27:01.653139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.657768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.657821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.657835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.662804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.662849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.662863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.667780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.667855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.667885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.670929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.670968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.670998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.675711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.675753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.675800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.680640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.680693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.680709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.684830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.684872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.684887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.688732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.688782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.688797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.693151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.693193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.693218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.697450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.697490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.697519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.701489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.027 [2024-07-25 12:27:01.701533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.027 [2024-07-25 12:27:01.701547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.027 [2024-07-25 12:27:01.705395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.705438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.705452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.708997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.709040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.709054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.712987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.713058] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.713088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.716666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.716730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.716744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.720491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.720532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.720546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.725579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.725637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.725682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.731025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.731071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.731086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.736003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.736061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.736091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.739247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.739288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.739315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.743660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.743709] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.743754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.748499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.748542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.748556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.752777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.752819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.752833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.756117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.756156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.756170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.759329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.759387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.759403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.763226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.763265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.763278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.767217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.767262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.767276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.771747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.771790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.771805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.775660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.775700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.775713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.780206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.780249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.780263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.784069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.784111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.784125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.787349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.787389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.787403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.791463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.791506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.791520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.795749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.795790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.795820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.799552] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.799594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.799608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.802954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.802998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.803027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.807787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.807833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.028 [2024-07-25 12:27:01.807863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.028 [2024-07-25 12:27:01.812436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.028 [2024-07-25 12:27:01.812479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.812493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.815830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.815871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.815885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.820674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.820750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.820765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.826162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.826237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.826252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:47.029 [2024-07-25 12:27:01.831445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.831488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.831502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.834467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.834509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.834523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.839703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.839748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.839763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.843391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.843441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.843455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.846906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.846949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.846963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.850863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.850921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.850951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.854897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.854939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.854953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.858851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.858910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.858925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.863526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.863568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.863583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.867432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.867485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.867505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.871301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.871357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.871373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.875688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.875744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.875773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.880048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.880106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.880137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.884141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.884198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.884227] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.029 [2024-07-25 12:27:01.887868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.029 [2024-07-25 12:27:01.887942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.029 [2024-07-25 12:27:01.887972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.288 [2024-07-25 12:27:01.893070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.288 [2024-07-25 12:27:01.893143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.288 [2024-07-25 12:27:01.893173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.288 [2024-07-25 12:27:01.897853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.897913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.897943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.901139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.901196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.901226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.905823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.905881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.905896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.910701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.910744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.910759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.913913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.913971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.914001] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.918216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.918319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.918334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.922235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.922306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.922322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.926107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.926151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.926165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.930430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.930474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.930488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.934658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.934699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.934713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.938653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.938702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.938716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.943153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.943210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.943224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.947880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.947939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.947953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.951223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.951270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.951284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.955768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.955811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.955826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.960137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.960179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.960194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.963520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.963562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.963576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.967070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.967142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.967173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.971923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.971981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.972012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.975379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.975423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.975438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.980119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.980177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.980207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.985284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.985339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.985355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.990668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.990726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.990756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.994074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.994132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.994146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:01.998380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:01.998422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:01.998436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:02.003177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:02.003237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:02.003251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.289 [2024-07-25 12:27:02.008635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.289 [2024-07-25 12:27:02.008680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.289 [2024-07-25 12:27:02.008704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.012817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.012858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.012873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.016502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.016543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.016556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.021281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.021332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.021348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.025987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.026043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.026072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.030006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.030063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.030093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.034494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 
[2024-07-25 12:27:02.034537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.034551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.037486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.037529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.037544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.042627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.042670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.042694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.046506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.046548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.046562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.049902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.049961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.049976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.055723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.055782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.055796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.061025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0) 00:19:47.290 [2024-07-25 12:27:02.061069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.290 [2024-07-25 12:27:02.061084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.290 [2024-07-25 12:27:02.063882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1031fd0)
00:19:47.290 [2024-07-25 12:27:02.063922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.290 [2024-07-25 12:27:02.063935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:47.290 [2024-07-25 12:27:02.069447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0)
00:19:47.290 [2024-07-25 12:27:02.069490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.290 [2024-07-25 12:27:02.069504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:47.290 [2024-07-25 12:27:02.074889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0)
00:19:47.290 [2024-07-25 12:27:02.074948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.290 [2024-07-25 12:27:02.074978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:47.290 [2024-07-25 12:27:02.078448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0)
00:19:47.290 [2024-07-25 12:27:02.078489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.290 [2024-07-25 12:27:02.078503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:47.290 [2024-07-25 12:27:02.083163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0)
00:19:47.290 [2024-07-25 12:27:02.083208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.290 [2024-07-25 12:27:02.083222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:47.290 [2024-07-25 12:27:02.087280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1031fd0)
00:19:47.290 [2024-07-25 12:27:02.087333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.290 [2024-07-25 12:27:02.087348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:47.290
00:19:47.290 Latency(us)
00:19:47.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:47.290 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:19:47.290 nvme0n1 : 2.00 7254.39 906.80 0.00 0.00 2201.04 610.68 10604.92
00:19:47.290 ===================================================================================================================
00:19:47.290 Total : 7254.39 906.80 0.00 0.00 2201.04 610.68 10604.92
00:19:47.290 0
00:19:47.290 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:47.290 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:47.290 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:47.290 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:47.290 | .driver_specific
00:19:47.290 | .nvme_error
00:19:47.290 | .status_code
00:19:47.290 | .command_transient_transport_error'
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 468 > 0 ))
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93143
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93143 ']'
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93143
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93143
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:47.549 killing process with pid 93143
12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93143'
12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93143
00:19:47.549 Received shutdown signal, test time was about 2.000000 seconds
00:19:47.549 00
00:19:47.549 Latency(us)
00:19:47.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:47.549 ===================================================================================================================
00:19:47.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:47.549 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93143
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93229
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93229 /var/tmp/bperf.sock
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93229 ']'
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:47.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:47.807 12:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:48.065 [2024-07-25 12:27:02.695549] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization...
00:19:48.065 [2024-07-25 12:27:02.695661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93229 ]
00:19:48.065 [2024-07-25 12:27:02.833031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:48.323 [2024-07-25 12:27:02.965510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:49.257 12:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:49.257 12:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:19:49.257 12:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:49.257 12:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:49.257 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:49.257 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:49.257 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:49.257 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:49.257 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:49.257 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:49.823 nvme0n1
00:19:49.823 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:19:49.823 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:49.823 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:49.823 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:49.823 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:49.823 12:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:49.823 Running I/O for 2 seconds...
00:19:49.823 [2024-07-25 12:27:04.537647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190ee5c8
00:19:49.823 [2024-07-25 12:27:04.538576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:49.823 [2024-07-25 12:27:04.538623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:19:49.823 [2024-07-25 12:27:04.549024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e2c28
00:19:49.824 [2024-07-25 12:27:04.549790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:49.824 [2024-07-25 12:27:04.549833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:19:49.824 [2024-07-25 12:27:04.564547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f7538
00:19:49.824 [2024-07-25 12:27:04.566363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:49.824 [2024-07-25 12:27:04.566413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:19:49.824 [2024-07-25 12:27:04.576397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e2c28
00:19:49.824 [2024-07-25 12:27:04.577958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:49.824 [2024-07-25 12:27:04.578004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:19:49.824 [2024-07-25 12:27:04.587726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f8e88
00:19:49.824 [2024-07-25 12:27:04.589107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:49.824 [2024-07-25 12:27:04.589153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:19:49.824 [2024-07-25 12:27:04.599368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e5a90
00:19:49.824 [2024-07-25 12:27:04.600760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:49.824 [2024-07-25 12:27:04.600801] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:49.824 [2024-07-25 12:27:04.611170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190ea248 00:19:49.824 [2024-07-25 12:27:04.612243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:49.824 [2024-07-25 12:27:04.612282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:49.824 [2024-07-25 12:27:04.622491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e27f0 00:19:49.824 [2024-07-25 12:27:04.623478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:49.824 [2024-07-25 12:27:04.623535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.824 [2024-07-25 12:27:04.634108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190dfdc0 00:19:49.824 [2024-07-25 12:27:04.634891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:49.824 [2024-07-25 12:27:04.634931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:49.824 [2024-07-25 12:27:04.648801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f81e0 00:19:49.824 [2024-07-25 12:27:04.649745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:49.824 [2024-07-25 12:27:04.649787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:49.824 [2024-07-25 12:27:04.660561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e1f80 00:19:49.824 [2024-07-25 12:27:04.661441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:49.824 [2024-07-25 12:27:04.661483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:49.824 [2024-07-25 12:27:04.671673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e38d0 00:19:49.824 [2024-07-25 12:27:04.672766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:49.824 [2024-07-25 12:27:04.672810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:49.824 [2024-07-25 12:27:04.686696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f6890 00:19:49.824 [2024-07-25 12:27:04.688328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:49.824 [2024-07-25 12:27:04.688374] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.698188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f4f40 00:19:50.084 [2024-07-25 12:27:04.699670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.699711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.710048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190ebfd0 00:19:50.084 [2024-07-25 12:27:04.711435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.711475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.721294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190de038 00:19:50.084 [2024-07-25 12:27:04.722432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.722473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.733298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190fb480 00:19:50.084 [2024-07-25 12:27:04.734343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.734384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.748025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190eea00 00:19:50.084 [2024-07-25 12:27:04.749836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.749893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.756974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f4f40 00:19:50.084 [2024-07-25 12:27:04.757762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.757815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.771628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e9e10 00:19:50.084 [2024-07-25 12:27:04.773033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 
[2024-07-25 12:27:04.773123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.782820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190fd640 00:19:50.084 [2024-07-25 12:27:04.784203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.784245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.795042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190ea680 00:19:50.084 [2024-07-25 12:27:04.796217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.796257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.807745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e3060 00:19:50.084 [2024-07-25 12:27:04.808885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.808927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.819213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e9e10 00:19:50.084 [2024-07-25 12:27:04.820210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.820279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.833941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e7818 00:19:50.084 [2024-07-25 12:27:04.835755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.835817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.842857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190fda78 00:19:50.084 [2024-07-25 12:27:04.843783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.843836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.857888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190ea680 00:19:50.084 [2024-07-25 12:27:04.859379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:50.084 [2024-07-25 12:27:04.859435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.869496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e95a0 00:19:50.084 [2024-07-25 12:27:04.870757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.870815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.881400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f9b30 00:19:50.084 [2024-07-25 12:27:04.882677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.882734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.893973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190de038 00:19:50.084 [2024-07-25 12:27:04.894722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.894762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.905691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190de038 00:19:50.084 [2024-07-25 12:27:04.906317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.906369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.919678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190fbcf0 00:19:50.084 [2024-07-25 12:27:04.921141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.084 [2024-07-25 12:27:04.921202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:50.084 [2024-07-25 12:27:04.931373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190fdeb0 00:19:50.084 [2024-07-25 12:27:04.932787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.085 [2024-07-25 12:27:04.932827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.085 [2024-07-25 12:27:04.943069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190eaef0 00:19:50.085 [2024-07-25 12:27:04.944178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21238 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:50.085 [2024-07-25 12:27:04.944232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:04.954570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e1710 00:19:50.344 [2024-07-25 12:27:04.955455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:04.955517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:04.965927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e6fa8 00:19:50.344 [2024-07-25 12:27:04.966671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:04.966710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:04.981099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190ed920 00:19:50.344 [2024-07-25 12:27:04.982889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:04.982942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:04.990401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190eee38 00:19:50.344 [2024-07-25 12:27:04.991379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:04.991434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.005236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e6738 00:19:50.344 [2024-07-25 12:27:05.006943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:05.006996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.016636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f5be8 00:19:50.344 [2024-07-25 12:27:05.018028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:05.018113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.028239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f96f8 00:19:50.344 [2024-07-25 12:27:05.029568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2333 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:05.029624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.042377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190fb048 00:19:50.344 [2024-07-25 12:27:05.044306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:05.044399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.050771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190ea248 00:19:50.344 [2024-07-25 12:27:05.051803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:05.051856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.065162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e3498 00:19:50.344 [2024-07-25 12:27:05.066895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:05.066947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.076194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190f7da8 00:19:50.344 [2024-07-25 12:27:05.077790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:05.077865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.088013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190ec840 00:19:50.344 [2024-07-25 12:27:05.089436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:05.089508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.098996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190fb048 00:19:50.344 [2024-07-25 12:27:05.100216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.344 [2024-07-25 12:27:05.100269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.344 [2024-07-25 12:27:05.110820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190ff3c8 00:19:50.344 [2024-07-25 12:27:05.111951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:11468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:50.344 [2024-07-25 12:27:05.112004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:19:50.344 [2024-07-25 12:27:05.125532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190de8a8
00:19:50.344 [2024-07-25 12:27:05.127314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:50.344 [2024-07-25 12:27:05.127377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
[... the same three-line pattern -- a tcp.c:2113:data_crc32_calc_done data digest error, the corresponding WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats from 12:27:05.134 through 12:27:06.510, differing only in cid, lba, pdu address, and sqhd; the repetitions are elided here for readability ...]
00:19:51.904 [2024-07-25 12:27:06.522106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed6fe0) with pdu=0x2000190e6300
00:19:51.904 [2024-07-25 12:27:06.523988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:51.904 [2024-07-25 12:27:06.524040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:19:51.904
00:19:51.904 Latency(us)
00:19:51.904 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min      max
00:19:51.904 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:51.904 nvme0n1            : 2.01        21527.83  84.09  0.00    0.00  5939.34  2249.08  16801.05
00:19:51.904 ===================================================================================================================
00:19:51.904 Total              :             21527.83  84.09  0.00    0.00  5939.34  2249.08  16801.05
00:19:51.904 0
00:19:51.904
12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:51.904 | .driver_specific
00:19:51.904 | .nvme_error
00:19:51.904 | .status_code
00:19:51.904 | .command_transient_transport_error'
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 ))
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93229
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93229 ']'
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93229
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
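[Annotation: the (( 169 > 0 )) check above is the test's pass condition -- at least one COMMAND TRANSIENT TRANSPORT ERROR must have been counted after the run. A minimal bash sketch of that counting path, reconstructed from the xtrace lines above; the bperf_rpc body matches the host/digest.sh@18 expansion shown in the trace, but the function layout itself is an assumption, not the verbatim script:]

    # Sketch (reconstructed from the xtrace above, not the verbatim script) of
    # how the transient-error count is read over the bdevperf RPC socket.
    bperf_rpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    get_transient_errcount() {
        # bdev_nvme_set_options --nvme-error-stat (set before the run) makes the
        # driver accumulate per-status-code counters under driver_specific.nvme_error.
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # Pass condition as seen in the trace: the count (169 here) must be positive.
    (( $(get_transient_errcount nvme0n1) > 0 ))

[The killprocess teardown traced above continues below.]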
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93229
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
killing process with pid 93229
12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93229'
Received shutdown signal, test time was about 2.000000 seconds
00:19:52.163
00:19:52.163 Latency(us)
00:19:52.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:52.163 ===================================================================================================================
00:19:52.163 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93229
00:19:52.163 12:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93229
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93319
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93319 /var/tmp/bperf.sock
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93319 ']'
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:52.422 12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
12:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
[2024-07-25 12:27:07.120076] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization...
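
The run_bperf_err trace above boils down to starting bdevperf idle (-z) with a private RPC socket and waiting for that socket before any RPCs are issued; a minimal sketch under the trace's own paths and flags (the polling loop is a simplified stand-in for autotest's waitforlisten, which also retries an initial RPC up to max_retries=100):

    #!/usr/bin/env bash
    # Launch bdevperf with no workload (-z); the randwrite/131072/qd16 job is
    # armed now but only runs once perform_tests is sent over the socket.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    BPERF_SOCK=/var/tmp/bperf.sock

    "$BDEVPERF" -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Simplified waitforlisten: poll until the UNIX-domain RPC socket exists,
    # bailing out if the app dies first.
    until [[ -S $BPERF_SOCK ]]; do
        kill -0 "$bperfpid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
        sleep 0.1
    done
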
00:19:52.422 [2024-07-25 12:27:07.120197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93319 ]
00:19:52.422 [2024-07-25 12:27:07.258660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:52.685 [2024-07-25 12:27:07.380784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:53.251 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:53.251 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:19:53.251 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:53.251 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:53.819 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:53.819 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:53.819 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:53.819 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:53.819 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:53.819 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:54.077 nvme0n1
00:19:54.077 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:19:54.077 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:54.077 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:54.077 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:54.077 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:54.077 12:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:54.077 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:54.077 Zero copy mechanism will not be used.
00:19:54.077 Running I/O for 2 seconds...
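
Between waitforlisten and "Running I/O for 2 seconds..." the trace above performs the whole error-injection setup over two RPC endpoints: bperf_rpc talks to the bdevperf app at /var/tmp/bperf.sock, while rpc_cmd talks to the nvmf target (assumed here to use rpc.py's default socket, which the trace does not show). A minimal sketch with every command and flag taken verbatim from the trace:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Host (bdevperf) side: keep per-status-code NVMe error counters and retry
    # failed I/O indefinitely, so digest errors land in the stats instead of
    # failing the job.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Target side: keep crc32c error injection off while the host connects.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # Host side: attach over TCP with the data digest (--ddgst) enabled, so
    # each data PDU carries a CRC32C that the receiver verifies.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side: now corrupt crc32c results (-t corrupt -i 32, as traced),
    # so a fraction of PDUs go out with a bad data digest.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the preconfigured randwrite workload inside bdevperf.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
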
00:19:54.077 [2024-07-25 12:27:08.888936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.077 [2024-07-25 12:27:08.889356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.077 [2024-07-25 12:27:08.889424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.077 [2024-07-25 12:27:08.894386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.077 [2024-07-25 12:27:08.894763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.077 [2024-07-25 12:27:08.894808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.077 [2024-07-25 12:27:08.899941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.077 [2024-07-25 12:27:08.900299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.077 [2024-07-25 12:27:08.900366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.077 [2024-07-25 12:27:08.905342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.077 [2024-07-25 12:27:08.905735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.078 [2024-07-25 12:27:08.905777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.078 [2024-07-25 12:27:08.910678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.078 [2024-07-25 12:27:08.911037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.078 [2024-07-25 12:27:08.911099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.078 [2024-07-25 12:27:08.916027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.078 [2024-07-25 12:27:08.916374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.078 [2024-07-25 12:27:08.916425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.078 [2024-07-25 12:27:08.921316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.078 [2024-07-25 12:27:08.921693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.078 [2024-07-25 12:27:08.921736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.078 [2024-07-25 12:27:08.926678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.078 [2024-07-25 12:27:08.927019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.078 [2024-07-25 12:27:08.927078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.078 [2024-07-25 12:27:08.932003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.078 [2024-07-25 12:27:08.932407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.078 [2024-07-25 12:27:08.932473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.078 [2024-07-25 12:27:08.937689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.078 [2024-07-25 12:27:08.938044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.078 [2024-07-25 12:27:08.938096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.943078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.943428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.943468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.948411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.948812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.948881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.953795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.954168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.954212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.959221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.959637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.959679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.964694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.965056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.965099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.970078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.970477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.970519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.975575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.975924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.975965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.981035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.981454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.981494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.986500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.986855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.986906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.991921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.992290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.992344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:08.997364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:08.997737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:08.997781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.002758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.003126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.003178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.008172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.008602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.008647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.013768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.014138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.014180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.019242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.019643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.019686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.024568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.024974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.025023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.029968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.030364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.030418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.035424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.035815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 
[2024-07-25 12:27:09.035857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.040872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.041295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.041348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.046391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.046759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.046801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.051715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.052083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.052149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.057188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.057579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.057624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.062567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.338 [2024-07-25 12:27:09.062941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.338 [2024-07-25 12:27:09.062982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.338 [2024-07-25 12:27:09.067944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.068309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.068371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.073282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.073653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.073695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.078664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.079041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.079089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.084048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.084450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.084501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.089650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.090026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.090069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.094995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.095360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.095409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.100415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.100789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.100831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.105838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.106210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.106253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.111343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.111703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.111753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.116642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.117027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.117070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.122016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.122393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.122449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.127556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.127908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.127950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.132942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.133326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.133379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.138389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.138735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.138781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.143823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.144217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.144266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.149444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.149801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.149844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.155109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.155474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.155517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.160796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.161153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.161206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.166504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.166858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.166900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.172156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.172494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.172536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.177955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.178305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.178360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.183730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.184085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.184128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.189483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 
[2024-07-25 12:27:09.189864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.189908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.195118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.195480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.195527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.339 [2024-07-25 12:27:09.200888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.339 [2024-07-25 12:27:09.201237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.339 [2024-07-25 12:27:09.201281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.599 [2024-07-25 12:27:09.206470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.599 [2024-07-25 12:27:09.206824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.599 [2024-07-25 12:27:09.206865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.599 [2024-07-25 12:27:09.212026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.599 [2024-07-25 12:27:09.212395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.599 [2024-07-25 12:27:09.212438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.599 [2024-07-25 12:27:09.217651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.599 [2024-07-25 12:27:09.218003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.599 [2024-07-25 12:27:09.218048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.599 [2024-07-25 12:27:09.223163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.599 [2024-07-25 12:27:09.223535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.599 [2024-07-25 12:27:09.223581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.599 [2024-07-25 12:27:09.228527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with 
pdu=0x2000190fef90 00:19:54.599 [2024-07-25 12:27:09.228897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.599 [2024-07-25 12:27:09.228939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.599 [2024-07-25 12:27:09.234158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.599 [2024-07-25 12:27:09.234545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.599 [2024-07-25 12:27:09.234589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.599 [2024-07-25 12:27:09.239744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.599 [2024-07-25 12:27:09.240119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.240163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.245182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.245575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.245618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.250781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.251138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.251182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.256151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.256550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.256600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.261673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.262043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.262087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.267332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.267716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.267759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.272667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.273037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.273079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.278092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.278500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.278542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.283694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.284068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.284114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.289156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.289566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.289608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.294677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.295060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.295102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.300141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.300535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.300576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.305720] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.306050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.306102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.311191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.311564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.311607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.316929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.317312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.317368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.322455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.322803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.322847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.327970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.328317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.328374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.333572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.333921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.333978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.339019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.339395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.339437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:54.600 [2024-07-25 12:27:09.344448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.344831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.344874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.350020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.350361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.350419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.355759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.356107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.356149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.361331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.361737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.361779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.366884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.367257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.367311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.372409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.372804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.372853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.600 [2024-07-25 12:27:09.377869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:54.600 [2024-07-25 12:27:09.378240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.600 [2024-07-25 12:27:09.378282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:54.600 [2024-07-25 12:27:09.383341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90
00:19:54.600 [2024-07-25 12:27:09.383733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:54.601 [2024-07-25 12:27:09.383775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence (tcp.c:2113 data_crc32_calc_done *ERROR* data digest error, nvme_qpair.c:243 WRITE command *NOTICE*, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion *NOTICE*) repeats for roughly 140 further digest-error WRITE commands between 12:27:09.388 and 12:27:10.095 (elapsed 00:19:54.601 through 00:19:55.386), all on tqpair=(0xed7180) with pdu=0x2000190fef90, qid:1 cid:15 nsid:1 len:32; only the lba value and the sqhd sequence number (cycling 0001/0021/0041/0061) differ between entries ...]
00:19:55.386 [2024-07-25 12:27:10.099834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90
00:19:55.386 [2024-07-25 12:27:10.100052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.386 [2024-07-25 12:27:10.100106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.386 [2024-07-25 12:27:10.104644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.386 [2024-07-25 12:27:10.104895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.386 [2024-07-25 12:27:10.104935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.386 [2024-07-25 12:27:10.109582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.386 [2024-07-25 12:27:10.109830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.386 [2024-07-25 12:27:10.109902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.386 [2024-07-25 12:27:10.114461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.386 [2024-07-25 12:27:10.114676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.386 [2024-07-25 12:27:10.114733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.386 [2024-07-25 12:27:10.119342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.386 [2024-07-25 12:27:10.119604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.386 [2024-07-25 12:27:10.119641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.386 [2024-07-25 12:27:10.124143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.386 [2024-07-25 12:27:10.124428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.386 [2024-07-25 12:27:10.124484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.386 [2024-07-25 12:27:10.128888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.386 [2024-07-25 12:27:10.129164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.386 [2024-07-25 12:27:10.129234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.133794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.134024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.134061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.138649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.138879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.138916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.143453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.143732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.143802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.148344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.148571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.148659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.153110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.153374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.153410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.157957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.158204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.158276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.162681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.162894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.162949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.167484] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.167724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.167779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.172161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.172402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.172478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.177033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.177305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.177404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.181890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.182124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.182179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.186723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.186958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.187013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.191489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.191743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.191797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.196329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.196562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.196649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
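The output above repeats one triplet per I/O: tcp.c:2113 (data_crc32_calc_done) reports a data digest error on the qpair, nvme_qpair.c prints the WRITE that carried the payload, and the command then completes with TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable. In NVMe/TCP the data digest (DDGST) is a CRC32C over the PDU's data bytes, and the steady cadence here looks like a digest-error pass exercising that check. Below is a minimal sketch of the comparison that is failing -- it is not SPDK's implementation, and the buffer contents and wire digest are made-up stand-ins -- just the CRC32C-versus-DDGST check in isolation:

/*
 * Illustrative only: mimics the check that data_crc32_calc_done performs
 * conceptually -- compute CRC32C (Castagnoli polynomial, reflected) over a
 * PDU's DATA bytes and compare against the digest received on the wire.
 * A mismatch is what the log reports as "Data digest error".
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise reflected CRC32C, polynomial 0x1EDC6F41 (reflected 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t pdu_data[32] = { 0 };          /* stand-in for the PDU DATA field */
    uint32_t received_ddgst = 0xDEADBEEFu; /* stand-in for the wire DDGST     */

    if (crc32c(pdu_data, sizeof(pdu_data)) != received_ddgst)
        fprintf(stderr, "Data digest error (simulated)\n");
    return 0;
}

A production path does more than log (the command is failed back with the transient-transport-error status seen above), but the CRC32C comparison is the whole of the detection step.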
00:19:55.387 [2024-07-25 12:27:10.201147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.201381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.201450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.206094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.206327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.206410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.210889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.211170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.211241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.215722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.215966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.216052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.220437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.220670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.220750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.225248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.225541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.225579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.230055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.230313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.230366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.234782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.235032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.235086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.239557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.239807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.239863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.244315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.244556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.244638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.387 [2024-07-25 12:27:10.249010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.387 [2024-07-25 12:27:10.249265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.387 [2024-07-25 12:27:10.249361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.253981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.254243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.254340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.258762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.259011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.259080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.263555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.263804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.263858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.268357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.268565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.268633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.273193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.273468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.273539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.278074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.278342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.278412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.282885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.283163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.283219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.287752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.288006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.288092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.292626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.292898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.292939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.297399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.297618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.297688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.302063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.302306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.302390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.306852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.307086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.307156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.311580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.311793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.311848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.316281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.316541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.316579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.321077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.321352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.321428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.325953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.648 [2024-07-25 12:27:10.326182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.648 [2024-07-25 12:27:10.326268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.648 [2024-07-25 12:27:10.330675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.330893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 
[2024-07-25 12:27:10.330963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.335367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.335631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.335668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.340195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.340483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.340524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.344977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.345245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.345331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.349813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.350028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.350080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.354713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.354940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.355000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.359528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.359792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.359847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.364403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.364638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.364677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.369278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.369529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.369582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.373989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.374237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.374315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.378921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.379151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.379207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.383812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.384044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.384119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.388641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.388891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.388932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.393435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.393695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.393749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.398226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.398511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.398569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.403031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.403270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.403340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.407953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.408171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.408207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.412780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.413014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.413055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.417537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.417795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.417866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.422454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.649 [2024-07-25 12:27:10.422701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.649 [2024-07-25 12:27:10.422740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.649 [2024-07-25 12:27:10.427272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.427563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.427607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.432034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.432275] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.432364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.436879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.437151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.437206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.441771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.442016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.442058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.446580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.446833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.446895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.451414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.451642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.451701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.456251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.456619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.456670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.461072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.461333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.461404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.465767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.466001] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.466039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.470570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.470811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.470865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.475407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.475635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.475690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.480259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.480518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.480574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.485071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.485355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.485405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.489932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.490192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.490234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.494708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.494928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.494967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.499534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 
00:19:55.650 [2024-07-25 12:27:10.499779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.499837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.504257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.504507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.504558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.650 [2024-07-25 12:27:10.509112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.650 [2024-07-25 12:27:10.509392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.650 [2024-07-25 12:27:10.509438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.513948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.514162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.514201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.518725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.518976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.519016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.523579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.523845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.523883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.528415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.528676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.528771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.533264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.533474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.533515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.538094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.538346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.538406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.543020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.543229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.543266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.547868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.548123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.548160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.552888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.553129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.553211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.557826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.558073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.558143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.562860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.563062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.563098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.567609] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.567825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.567879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.572516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.572794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.572833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.577498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.577733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.577795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.582573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.582804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.582840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.587449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.587675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.587730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.592489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.592717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.592755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.597126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.597428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.597469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
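The "(00/22) ... p:0 m:0 dnr:0" suffix printed on each completion is the NVMe Status Field: status code type 0x0 (generic), status code 0x22 (transient transport error), plus the phase, more, and do-not-retry bits -- dnr:0 meaning the host is allowed to retry. A small decode sketch, assuming the field layout from the NVMe base specification (this is not SPDK code, and the raw value is constructed for the example):

/*
 * Decode the 16-bit phase+status view of an NVMe completion: bit 0 is the
 * phase tag, bits 8:1 the status code, bits 11:9 the status code type,
 * bit 14 "more", bit 15 "do not retry".
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* SCT 0x0 / SC 0x22, DNR clear: the completion this log repeats. */
    uint16_t sf = (uint16_t)((0x0u << 9) | (0x22u << 1));

    unsigned p   =  sf        & 0x1;   /* phase tag        */
    unsigned sc  = (sf >> 1)  & 0xFF;  /* status code      */
    unsigned sct = (sf >> 9)  & 0x7;   /* status code type */
    unsigned m   = (sf >> 14) & 0x1;   /* more             */
    unsigned dnr = (sf >> 15) & 0x1;   /* do not retry     */

    printf("(%02x/%02x) p:%u m:%u dnr:%u -> %s\n", sct, sc, p, m, dnr,
           dnr ? "do not retry" : "retryable");
    return 0;
}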
00:19:55.911 [2024-07-25 12:27:10.602048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.602280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.602349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.606845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.607062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.607120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.611916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.612114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.612151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.616667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.911 [2024-07-25 12:27:10.616922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.911 [2024-07-25 12:27:10.616963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.911 [2024-07-25 12:27:10.621499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.621735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.621786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.626350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.626588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.626624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.631240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.631513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.631555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.636179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.636419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.636460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.641165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.641407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.641446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.645980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.646235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.646319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.650679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.650902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.650937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.655437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.655631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.655669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.660201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.660424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.660459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.665068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.665362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.665432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.670076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.670295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.670397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.674908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.675126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.675178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.679765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.679966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.680001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.684618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.684905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.684946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.689365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.689606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.689690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.694258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.694507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.694542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.699041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.699296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.699385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.703787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.703998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.704050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.708560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.708835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.708891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.713380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.713641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.713712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.718400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.718683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.718724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.723397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.723646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.723700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.728188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.728438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.728531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.732964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.733234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 
[2024-07-25 12:27:10.733341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.737879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.738104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.738142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.742822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.743054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.743107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.912 [2024-07-25 12:27:10.747607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.912 [2024-07-25 12:27:10.747834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.912 [2024-07-25 12:27:10.747871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.913 [2024-07-25 12:27:10.752291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.913 [2024-07-25 12:27:10.752525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.913 [2024-07-25 12:27:10.752583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.913 [2024-07-25 12:27:10.757179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.913 [2024-07-25 12:27:10.757412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.913 [2024-07-25 12:27:10.757463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.913 [2024-07-25 12:27:10.762117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.913 [2024-07-25 12:27:10.762355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.913 [2024-07-25 12:27:10.762423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.913 [2024-07-25 12:27:10.766878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.913 [2024-07-25 12:27:10.767136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:55.913 [2024-07-25 12:27:10.767180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.913 [2024-07-25 12:27:10.771730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:55.913 [2024-07-25 12:27:10.771934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.913 [2024-07-25 12:27:10.771975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.172 [2024-07-25 12:27:10.776442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.172 [2024-07-25 12:27:10.776637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.172 [2024-07-25 12:27:10.776712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.172 [2024-07-25 12:27:10.781304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.172 [2024-07-25 12:27:10.781529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.172 [2024-07-25 12:27:10.781577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.172 [2024-07-25 12:27:10.786157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.172 [2024-07-25 12:27:10.786395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.172 [2024-07-25 12:27:10.786447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.172 [2024-07-25 12:27:10.790967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.172 [2024-07-25 12:27:10.791197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.172 [2024-07-25 12:27:10.791259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.172 [2024-07-25 12:27:10.795719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.172 [2024-07-25 12:27:10.795925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.172 [2024-07-25 12:27:10.795960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.172 [2024-07-25 12:27:10.800568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.800800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.800840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.805231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.805492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.805578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.810139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.810445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.810486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.814910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.815104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.815139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.819663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.819913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.819967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.824457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.824673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.824768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.829461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.829705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.829745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.834280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.834633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.834670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.839466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.839701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.839757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.844385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.844604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.844657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.849268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.849524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.849579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.854401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.854612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.854652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.859430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.859685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.859725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.864195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 [2024-07-25 12:27:10.864455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.173 [2024-07-25 12:27:10.864526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.173 [2024-07-25 12:27:10.869117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90 00:19:56.173 
[2024-07-25 12:27:10.869330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.173 [2024-07-25 12:27:10.869365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:56.173 [2024-07-25 12:27:10.873899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90
00:19:56.173 [2024-07-25 12:27:10.874135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.173 [2024-07-25 12:27:10.874174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:56.173 [2024-07-25 12:27:10.878739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed7180) with pdu=0x2000190fef90
00:19:56.173 [2024-07-25 12:27:10.878954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.173 [2024-07-25 12:27:10.878990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:56.173
00:19:56.173 Latency(us)
00:19:56.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:56.173 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:19:56.173 nvme0n1 : 2.00 6156.51 769.56 0.00 0.00 2592.59 1809.69 7357.91
00:19:56.173 ===================================================================================================================
00:19:56.173 Total : 6156.51 769.56 0.00 0.00 2592.59 1809.69 7357.91
00:19:56.173 0
00:19:56.173 12:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:56.173 12:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:56.173 12:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:56.173 12:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:56.173 | .driver_specific
00:19:56.173 | .nvme_error
00:19:56.173 | .status_code
00:19:56.173 | .command_transient_transport_error'
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 397 > 0 ))
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93319
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93319 ']'
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93319
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93319
00:19:56.432 killing process with pid 93319
00:19:56.432 Received shutdown signal, test time was about 2.000000 seconds
00:19:56.432
00:19:56.432 Latency(us)
00:19:56.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:56.432 ===================================================================================================================
00:19:56.432 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93319'
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93319
00:19:56.432 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93319
00:19:56.691 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93003
00:19:56.691 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93003 ']'
00:19:56.691 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93003
00:19:56.691 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:19:56.691 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:56.691 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93003
00:19:56.691 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:56.691 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:56.691 killing process with pid 93003 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93003' 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93003 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93003
00:19:56.950 ************************************
00:19:56.950 END TEST nvmf_digest_error
00:19:56.950 ************************************
00:19:56.950
00:19:56.950 real 0m19.051s
00:19:56.950 user 0m36.291s
00:19:56.950 sys 0m5.061s
00:19:56.950 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:56.950 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:56.950 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:19:56.950 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:19:56.950 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:56.950 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in
{1..20} 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.209 rmmod nvme_tcp 00:19:57.209 rmmod nvme_fabrics 00:19:57.209 rmmod nvme_keyring 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93003 ']' 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93003 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 93003 ']' 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 93003 00:19:57.209 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93003) - No such process 00:19:57.209 Process with pid 93003 is not found 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 93003 is not found' 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:57.209 00:19:57.209 real 0m39.519s 00:19:57.209 user 1m13.023s 00:19:57.209 sys 0m11.014s 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:57.209 ************************************ 00:19:57.209 END TEST nvmf_digest 00:19:57.209 ************************************ 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:19:57.209 12:27:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:57.209 12:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:57.209 12:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:57.209 12:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.209 ************************************ 00:19:57.209 START TEST nvmf_mdns_discovery 00:19:57.209 ************************************ 00:19:57.209 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh 
--transport=tcp 00:19:57.468 * Looking for test storage... 00:19:57.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.468 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 
00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.469 12:27:12 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:57.469 Cannot find device "nvmf_tgt_br" 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.469 Cannot find device "nvmf_tgt_br2" 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:57.469 Cannot find device "nvmf_tgt_br" 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:57.469 Cannot find device "nvmf_tgt_br2" 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:57.469 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:57.729 
12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:19:57.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:57.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms
00:19:57.729
00:19:57.729 --- 10.0.0.2 ping statistics ---
00:19:57.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:57.729 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:19:57.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:19:57.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms
00:19:57.729
00:19:57.729 --- 10.0.0.3 ping statistics ---
00:19:57.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:57.729 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:19:57.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:57.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:57.729 00:19:57.729 --- 10.0.0.1 ping statistics --- 00:19:57.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.729 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=93615 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 93615 00:19:57.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 93615 ']' 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.729 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.730 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.730 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.730 12:27:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.730 [2024-07-25 12:27:12.577167] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
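The nvmf_tgt launched above runs with --wait-for-rpc, so it idles in a pre-initialization state until framework_start_init arrives on its RPC socket; that ordering is what lets the test set the discovery filter before the subsystems come up, as the rpc_cmd trace below shows. A minimal hand-driven sketch of the same bring-up, for illustration only (this is not the harness's waitforlisten implementation; it assumes the repo root as working directory and the default /var/tmp/spdk.sock RPC socket):

  # start the target in the test namespace, held before subsystem init
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  # poll until the RPC socket answers; rpc_get_methods is available pre-init
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # startup-time configuration has to land before framework_start_init
  ./scripts/rpc.py nvmf_set_config --discovery-filter=address
  # release the --wait-for-rpc hold and finish initialization
  ./scripts/rpc.py framework_start_init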
00:19:57.730 [2024-07-25 12:27:12.577279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.988 [2024-07-25 12:27:12.720245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.246 [2024-07-25 12:27:12.858582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.246 [2024-07-25 12:27:12.858647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.246 [2024-07-25 12:27:12.858664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.246 [2024-07-25 12:27:12.858675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.246 [2024-07-25 12:27:12.858685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.246 [2024-07-25 12:27:12.858719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.812 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.070 [2024-07-25 12:27:13.746044] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.070 [2024-07-25 12:27:13.754219] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.070 null0 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.070 null1 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.070 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.070 null2 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.071 null3 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.071 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
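rpc_cmd with no -s flag talks to the target's default RPC socket (/var/tmp/spdk.sock); because nvmf_tgt was started with --wait-for-rpc, configuration happens before framework init. The bring-up traced above condenses to roughly the following sketch (the scripts/rpc.py path is assumed from the nvmf_tgt path used in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_set_config --discovery-filter=address   # filter discovery entries by address
$rpc framework_start_init                         # leave the --wait-for-rpc pre-init state
$rpc nvmf_create_transport -t tcp -o -u 8192      # transport options as set by nvmf/common.sh for tcp
# Discovery-subsystem listener that the mDNS records will point at.
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
# Four null bdevs (1000 MB, 512 B blocks) used as namespaces later in the test.
for b in null0 null1 null2 null3; do
    $rpc bdev_null_create "$b" 1000 512
done
$rpc bdev_wait_for_examine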
00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=93671 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 93671 /tmp/host.sock 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 93671 ']' 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.071 12:27:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.071 [2024-07-25 12:27:13.894097] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:19:59.071 [2024-07-25 12:27:13.894786] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93671 ] 00:19:59.328 [2024-07-25 12:27:14.047297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.328 [2024-07-25 12:27:14.178760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.263 12:27:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.263 12:27:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:20:00.263 12:27:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:00.263 12:27:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:00.263 12:27:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:00.263 12:27:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=93700 00:20:00.263 12:27:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:00.263 12:27:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:00.263 12:27:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:00.263 Process 981 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:00.263 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:00.263 Successfully dropped root privileges. 00:20:00.263 avahi-daemon 0.8 starting up. 00:20:01.198 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:01.198 Successfully called chroot(). 
00:20:01.198 Successfully dropped remaining capabilities. 00:20:01.198 No service file found in /etc/avahi/services. 00:20:01.198 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:01.198 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:01.198 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:01.198 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:01.198 Network interface enumeration completed. 00:20:01.198 Registering new address record for fe80::40d4:56ff:fec8:de4d on nvmf_tgt_if2.*. 00:20:01.198 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:20:01.198 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:20:01.198 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:20:01.198 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 535813392. 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:01.198 12:27:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.198 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:20:01.198 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:20:01.198 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:01.198 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.198 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.198 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:01.198 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.198 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:01.198 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.457 [2024-07-25 12:27:16.259208] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.457 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.716 [2024-07-25 12:27:16.322960] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.716 [2024-07-25 12:27:16.362933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.716 [2024-07-25 12:27:16.370897] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.716 12:27:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:20:02.651 [2024-07-25 12:27:17.159217] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:02.910 [2024-07-25 12:27:17.759265] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:02.910 [2024-07-25 12:27:17.759321] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 
fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:02.910 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:02.910 cookie is 0 00:20:02.910 is_local: 1 00:20:02.910 our_own: 0 00:20:02.910 wide_area: 0 00:20:02.910 multicast: 1 00:20:02.910 cached: 1 00:20:03.169 [2024-07-25 12:27:17.859270] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:03.169 [2024-07-25 12:27:17.859316] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:03.169 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:03.169 cookie is 0 00:20:03.169 is_local: 1 00:20:03.169 our_own: 0 00:20:03.169 wide_area: 0 00:20:03.169 multicast: 1 00:20:03.169 cached: 1 00:20:03.169 [2024-07-25 12:27:17.859333] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:03.169 [2024-07-25 12:27:17.959231] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:03.169 [2024-07-25 12:27:17.959255] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:03.169 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:03.169 cookie is 0 00:20:03.169 is_local: 1 00:20:03.169 our_own: 0 00:20:03.169 wide_area: 0 00:20:03.169 multicast: 1 00:20:03.169 cached: 1 00:20:03.428 [2024-07-25 12:27:18.059244] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:03.428 [2024-07-25 12:27:18.059269] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:03.428 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:03.428 cookie is 0 00:20:03.428 is_local: 1 00:20:03.428 our_own: 0 00:20:03.428 wide_area: 0 00:20:03.428 multicast: 1 00:20:03.428 cached: 1 00:20:03.428 [2024-07-25 12:27:18.059280] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:04.023 [2024-07-25 12:27:18.771051] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:04.023 [2024-07-25 12:27:18.771095] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:04.023 [2024-07-25 12:27:18.771116] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:04.023 [2024-07-25 12:27:18.857197] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:20:04.284 [2024-07-25 12:27:18.914651] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:04.284 [2024-07-25 12:27:18.914683] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:04.284 [2024-07-25 12:27:18.970618] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:04.284 [2024-07-25 12:27:18.970652] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:04.284 [2024-07-25 12:27:18.970673] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:04.284 [2024-07-25 12:27:19.056772] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:20:04.284 [2024-07-25 12:27:19.112972] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:04.284 [2024-07-25 12:27:19.113002] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 
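On the host side (the second nvmf_tgt instance, reached via -s /tmp/host.sock), the mDNS flow traced above reduces to a single RPC: start an avahi browse for _nvme-disc._tcp and attach to every advertised discovery service. A sketch of that sequence and of the checks that follow in the trace (rpc.py path assumed as above; the expected values are taken from the comparisons below):

host_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }
host_rpc log_set_flag bdev_nvme
# Browse _nvme-disc._tcp via avahi, attach as host nqn.2021-12.io.spdk:test.
host_rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
sleep 5   # give avahi time to resolve the spdk0/spdk1 records on both interfaces
host_rpc bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'        # -> mdns
host_rpc bdev_nvme_get_discovery_info | jq -r '.[].name' | sort      # -> mdns0_nvme mdns1_nvme
host_rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort         # -> mdns0_nvme0 mdns1_nvme0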
00:20:06.812 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:06.813 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.071 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:07.072 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.072 12:27:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.007 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.267 [2024-07-25 12:27:22.926551] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:08.267 [2024-07-25 12:27:22.926809] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:08.267 [2024-07-25 12:27:22.926860] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:08.267 [2024-07-25 12:27:22.926915] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:08.267 [2024-07-25 12:27:22.926934] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.267 [2024-07-25 12:27:22.934444] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:08.267 [2024-07-25 12:27:22.934786] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:08.267 [2024-07-25 12:27:22.934855] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.267 12:27:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:20:08.267 [2024-07-25 12:27:23.065946] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:20:08.267 [2024-07-25 12:27:23.066199] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:20:08.267 [2024-07-25 12:27:23.125283] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:08.267 [2024-07-25 12:27:23.125341] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 
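Adding the 4421 listeners raises AERs on both discovery controllers; the host re-reads the discovery log pages and attaches the new paths, so each subsystem should report ports 4420 and 4421, as the trace below confirms. The path check used by the script (mdns_discovery.sh@73) condenses to this sketch, with the notification bookkeeping alongside (paths assumed as above):

get_subsystem_paths() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
[[ $(get_subsystem_paths mdns0_nvme0) == "4420 4421" ]]
[[ $(get_subsystem_paths mdns1_nvme0) == "4420 4421" ]]
# Namespace-add events arrive as notifications; count only those newer than
# the last seen notify_id (2 after the first pair of namespaces was added).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
    notify_get_notifications -i 2 | jq '. | length'   # -> 2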
00:20:08.267 [2024-07-25 12:27:23.125351] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:08.267 [2024-07-25 12:27:23.125372] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:08.267 [2024-07-25 12:27:23.125487] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:08.267 [2024-07-25 12:27:23.125498] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:08.267 [2024-07-25 12:27:23.125504] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:08.267 [2024-07-25 12:27:23.125520] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:08.525 [2024-07-25 12:27:23.171065] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:08.525 [2024-07-25 12:27:23.171090] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:08.525 [2024-07-25 12:27:23.171158] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:08.525 [2024-07-25 12:27:23.171170] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:09.088 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:20:09.088 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:09.088 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:09.088 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.088 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.088 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:09.088 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:09.345 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.345 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:09.345 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:09.345 12:27:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- 
# xargs 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:09.345 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.346 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.605 [2024-07-25 12:27:24.235600] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:09.605 [2024-07-25 12:27:24.235646] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:09.605 [2024-07-25 12:27:24.235705] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:09.605 [2024-07-25 12:27:24.235724] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.605 [2024-07-25 12:27:24.243322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.605 [2024-07-25 12:27:24.243411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.605 [2024-07-25 12:27:24.243429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.605 [2024-07-25 12:27:24.243440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.605 [2024-07-25 12:27:24.243451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.605 [2024-07-25 12:27:24.243461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.605 [2024-07-25 12:27:24.243472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.605 [2024-07-25 12:27:24.243482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.605 [2024-07-25 
12:27:24.243492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.605 [2024-07-25 12:27:24.243599] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:09.605 [2024-07-25 12:27:24.243654] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.605 12:27:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:20:09.605 [2024-07-25 12:27:24.250629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.605 [2024-07-25 12:27:24.250666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.605 [2024-07-25 12:27:24.250682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.605 [2024-07-25 12:27:24.250692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.605 [2024-07-25 12:27:24.250704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.605 [2024-07-25 12:27:24.250714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.605 [2024-07-25 12:27:24.250726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.605 [2024-07-25 12:27:24.250736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.605 [2024-07-25 12:27:24.250747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.605 [2024-07-25 12:27:24.253257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.605 [2024-07-25 12:27:24.260587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.605 [2024-07-25 12:27:24.263282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.605 [2024-07-25 12:27:24.263473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.605 [2024-07-25 12:27:24.263502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.605 [2024-07-25 12:27:24.263516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.605 [2024-07-25 12:27:24.263538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.605 [2024-07-25 12:27:24.263557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.605 [2024-07-25 12:27:24.263568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
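errno 111 is ECONNREFUSED: once nvmf_subsystem_remove_listener tore down the 4420 listeners, nothing accepts on those ports any more, so every reconnect attempt is refused and bdev_nvme cycles through the disconnect/reset/fail loop seen here. This is the behavior under test; the surviving 4421 paths are expected to carry on (an assumption about assertions that follow beyond this excerpt). A hypothetical spot-check, reusing the get_subsystem_paths helper sketched earlier:

get_subsystem_paths mdns0_nvme0   # expected to settle on: 4421
get_subsystem_paths mdns1_nvme0   # expected to settle on: 4421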
00:20:09.605 [2024-07-25 12:27:24.263581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.605 [2024-07-25 12:27:24.263599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.605 [2024-07-25 12:27:24.270601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.605 [2024-07-25 12:27:24.270752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.605 [2024-07-25 12:27:24.270776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.605 [2024-07-25 12:27:24.270788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.605 [2024-07-25 12:27:24.270828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.605 [2024-07-25 12:27:24.270847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.605 [2024-07-25 12:27:24.270884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.605 [2024-07-25 12:27:24.270895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.605 [2024-07-25 12:27:24.270913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.605 [2024-07-25 12:27:24.273398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.605 [2024-07-25 12:27:24.273528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.605 [2024-07-25 12:27:24.273552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.605 [2024-07-25 12:27:24.273565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.605 [2024-07-25 12:27:24.273584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.605 [2024-07-25 12:27:24.273601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.605 [2024-07-25 12:27:24.273612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.605 [2024-07-25 12:27:24.273622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.605 [2024-07-25 12:27:24.273639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.605 [2024-07-25 12:27:24.280786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.605 [2024-07-25 12:27:24.280893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.605 [2024-07-25 12:27:24.280918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.605 [2024-07-25 12:27:24.280931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.605 [2024-07-25 12:27:24.280950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.605 [2024-07-25 12:27:24.280968] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.605 [2024-07-25 12:27:24.280979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.605 [2024-07-25 12:27:24.280989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.605 [2024-07-25 12:27:24.281005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.605 [2024-07-25 12:27:24.283478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.606 [2024-07-25 12:27:24.283602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.283626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.606 [2024-07-25 12:27:24.283638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.283657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.283674] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.283685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.283695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.606 [2024-07-25 12:27:24.283712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.606 [2024-07-25 12:27:24.290849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.606 [2024-07-25 12:27:24.290986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.291011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.606 [2024-07-25 12:27:24.291023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.291042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.291060] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.291070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.291080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.606 [2024-07-25 12:27:24.291112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.606 [2024-07-25 12:27:24.293566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.606 [2024-07-25 12:27:24.293693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.293717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.606 [2024-07-25 12:27:24.293729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.293748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.293764] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.293775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.293784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.606 [2024-07-25 12:27:24.293835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.606 [2024-07-25 12:27:24.300948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.606 [2024-07-25 12:27:24.301063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.301103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.606 [2024-07-25 12:27:24.301116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.301134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.301151] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.301161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.301171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.606 [2024-07-25 12:27:24.301203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.606 [2024-07-25 12:27:24.303658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.606 [2024-07-25 12:27:24.303780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.303804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.606 [2024-07-25 12:27:24.303816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.303834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.303870] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.303884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.303910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.606 [2024-07-25 12:27:24.303941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.606 [2024-07-25 12:27:24.311028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.606 [2024-07-25 12:27:24.311160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.311183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.606 [2024-07-25 12:27:24.311195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.311214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.311231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.311241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.311251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.606 [2024-07-25 12:27:24.311268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.606 [2024-07-25 12:27:24.313730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.606 [2024-07-25 12:27:24.313859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.313884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.606 [2024-07-25 12:27:24.313896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.313915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.313951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.313965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.313976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.606 [2024-07-25 12:27:24.313993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.606 [2024-07-25 12:27:24.321136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.606 [2024-07-25 12:27:24.321261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.321285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.606 [2024-07-25 12:27:24.321297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.321329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.321348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.321359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.321368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.606 [2024-07-25 12:27:24.321383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.606 [2024-07-25 12:27:24.323806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.606 [2024-07-25 12:27:24.323930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.323954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.606 [2024-07-25 12:27:24.323968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.323999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.324039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.324052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.324063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.606 [2024-07-25 12:27:24.324080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.606 [2024-07-25 12:27:24.331212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.606 [2024-07-25 12:27:24.331371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.606 [2024-07-25 12:27:24.331397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.606 [2024-07-25 12:27:24.331410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.606 [2024-07-25 12:27:24.331429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.606 [2024-07-25 12:27:24.331446] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.606 [2024-07-25 12:27:24.331472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.606 [2024-07-25 12:27:24.331500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.606 [2024-07-25 12:27:24.331517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.606 [2024-07-25 12:27:24.333879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.606 [2024-07-25 12:27:24.334014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.607 [2024-07-25 12:27:24.334038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.607 [2024-07-25 12:27:24.334055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.607 [2024-07-25 12:27:24.334074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.607 [2024-07-25 12:27:24.334164] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.607 [2024-07-25 12:27:24.334181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.607 [2024-07-25 12:27:24.334192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.607 [2024-07-25 12:27:24.334209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.607 [2024-07-25 12:27:24.341312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.607 [2024-07-25 12:27:24.341457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.607 [2024-07-25 12:27:24.341482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.607 [2024-07-25 12:27:24.341494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.607 [2024-07-25 12:27:24.341512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.607 [2024-07-25 12:27:24.341529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.607 [2024-07-25 12:27:24.341539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.607 [2024-07-25 12:27:24.341564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.607 [2024-07-25 12:27:24.341598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.607 [2024-07-25 12:27:24.343979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.607 [2024-07-25 12:27:24.344104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.607 [2024-07-25 12:27:24.344128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.607 [2024-07-25 12:27:24.344156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.607 [2024-07-25 12:27:24.344186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.607 [2024-07-25 12:27:24.344225] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.607 [2024-07-25 12:27:24.344238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.607 [2024-07-25 12:27:24.344249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.607 [2024-07-25 12:27:24.344265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.607 [2024-07-25 12:27:24.351419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.607 [2024-07-25 12:27:24.351550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.607 [2024-07-25 12:27:24.351574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.607 [2024-07-25 12:27:24.351586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.607 [2024-07-25 12:27:24.351605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.607 [2024-07-25 12:27:24.351628] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.607 [2024-07-25 12:27:24.351639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.607 [2024-07-25 12:27:24.351649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.607 [2024-07-25 12:27:24.351665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.607 [2024-07-25 12:27:24.354051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.607 [2024-07-25 12:27:24.354175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.607 [2024-07-25 12:27:24.354198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.607 [2024-07-25 12:27:24.354210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.607 [2024-07-25 12:27:24.354228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.607 [2024-07-25 12:27:24.354261] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.607 [2024-07-25 12:27:24.354274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.607 [2024-07-25 12:27:24.354283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.607 [2024-07-25 12:27:24.354331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.607 [2024-07-25 12:27:24.361513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.607 [2024-07-25 12:27:24.361639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.607 [2024-07-25 12:27:24.361663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.607 [2024-07-25 12:27:24.361675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.607 [2024-07-25 12:27:24.361692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.607 [2024-07-25 12:27:24.361709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.607 [2024-07-25 12:27:24.361718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.607 [2024-07-25 12:27:24.361728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.607 [2024-07-25 12:27:24.361743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.607 [2024-07-25 12:27:24.364142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.607 [2024-07-25 12:27:24.364265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.607 [2024-07-25 12:27:24.364289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.607 [2024-07-25 12:27:24.364301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.607 [2024-07-25 12:27:24.364349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.607 [2024-07-25 12:27:24.364387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.607 [2024-07-25 12:27:24.364401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.607 [2024-07-25 12:27:24.364411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.607 [2024-07-25 12:27:24.364428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.607 [2024-07-25 12:27:24.371605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.607 [2024-07-25 12:27:24.371733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.607 [2024-07-25 12:27:24.371757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703380 with addr=10.0.0.3, port=4420 00:20:09.607 [2024-07-25 12:27:24.371769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703380 is same with the state(5) to be set 00:20:09.607 [2024-07-25 12:27:24.371787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703380 (9): Bad file descriptor 00:20:09.607 [2024-07-25 12:27:24.371804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.607 [2024-07-25 12:27:24.371814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.607 [2024-07-25 12:27:24.371824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.607 [2024-07-25 12:27:24.371840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.607 [2024-07-25 12:27:24.374230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.607 [2024-07-25 12:27:24.374368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.607 [2024-07-25 12:27:24.374392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17036b0 with addr=10.0.0.2, port=4420 00:20:09.607 [2024-07-25 12:27:24.374404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17036b0 is same with the state(5) to be set 00:20:09.607 [2024-07-25 12:27:24.374422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17036b0 (9): Bad file descriptor 00:20:09.607 [2024-07-25 12:27:24.374470] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:20:09.607 [2024-07-25 12:27:24.374493] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:09.607 [2024-07-25 12:27:24.374546] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:09.607 [2024-07-25 12:27:24.374589] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:09.607 [2024-07-25 12:27:24.374608] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:09.607 [2024-07-25 12:27:24.374624] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:09.607 [2024-07-25 12:27:24.374664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.607 [2024-07-25 12:27:24.374680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.607 [2024-07-25 12:27:24.374691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:20:09.607 [2024-07-25 12:27:24.374722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.607 [2024-07-25 12:27:24.460553] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:09.607 [2024-07-25 12:27:24.460633] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:20:10.540 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@73 -- # xargs 00:20:10.541 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.799 12:27:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:20:10.799 [2024-07-25 12:27:25.559431] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:11.732 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:20:11.732 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:11.732 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.732 12:27:26 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:11.732 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:11.732 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:11.732 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:11.990 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.991 [2024-07-25 12:27:26.780676] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:20:11.991 2024/07/25 12:27:26 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:11.991 request: 00:20:11.991 { 00:20:11.991 "method": "bdev_nvme_start_mdns_discovery", 00:20:11.991 "params": { 00:20:11.991 "name": "mdns", 00:20:11.991 "svcname": "_nvme-disc._http", 00:20:11.991 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:11.991 } 00:20:11.991 } 00:20:11.991 Got JSON-RPC error response 00:20:11.991 GoRPCClient: error on JSON-RPC call 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:11.991 12:27:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:20:12.556 [2024-07-25 12:27:27.369486] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:12.814 [2024-07-25 12:27:27.469491] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:12.814 [2024-07-25 12:27:27.569498] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:12.814 [2024-07-25 12:27:27.569537] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:12.814 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:12.814 cookie is 0 00:20:12.814 is_local: 1 00:20:12.814 our_own: 0 00:20:12.814 wide_area: 0 00:20:12.814 multicast: 1 00:20:12.814 cached: 1 00:20:12.814 [2024-07-25 12:27:27.669516] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:12.814 [2024-07-25 12:27:27.669573] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:12.814 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:12.814 cookie is 0 00:20:12.814 is_local: 1 00:20:12.814 our_own: 0 00:20:12.814 wide_area: 0 00:20:12.814 multicast: 1 00:20:12.814 cached: 1 00:20:12.814 [2024-07-25 12:27:27.669602] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:13.072 [2024-07-25 12:27:27.769507] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:13.072 [2024-07-25 12:27:27.769559] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:13.072 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:13.072 cookie is 0 00:20:13.072 is_local: 1 00:20:13.072 our_own: 0 00:20:13.072 wide_area: 0 00:20:13.072 multicast: 1 00:20:13.072 cached: 1 00:20:13.072 [2024-07-25 12:27:27.869510] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:13.072 [2024-07-25 12:27:27.869560] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:13.072 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:13.072 cookie is 0 00:20:13.072 is_local: 1 00:20:13.072 our_own: 0 00:20:13.072 wide_area: 0 00:20:13.072 multicast: 1 00:20:13.072 cached: 1 00:20:13.072 [2024-07-25 12:27:27.869580] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:14.034 [2024-07-25 12:27:28.583080] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:14.034 [2024-07-25 12:27:28.583129] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:14.034 [2024-07-25 12:27:28.583164] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:14.034 [2024-07-25 12:27:28.669246] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:20:14.034 [2024-07-25 12:27:28.729902] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:14.034 [2024-07-25 12:27:28.729974] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:14.034 [2024-07-25 12:27:28.782740] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:14.034 [2024-07-25 12:27:28.782776] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:14.034 [2024-07-25 12:27:28.782810] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:14.034 [2024-07-25 12:27:28.868852] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:20:14.293 [2024-07-25 12:27:28.929222] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:14.293 [2024-07-25 12:27:28.929254] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.630 [2024-07-25 12:27:31.962895] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:20:17.630 2024/07/25 12:27:31 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 
00:20:17.630 request: 00:20:17.630 { 00:20:17.630 "method": "bdev_nvme_start_mdns_discovery", 00:20:17.630 "params": { 00:20:17.630 "name": "cdc", 00:20:17.630 "svcname": "_nvme-disc._tcp", 00:20:17.630 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:17.630 } 00:20:17.630 } 00:20:17.630 Got JSON-RPC error response 00:20:17.630 GoRPCClient: error on JSON-RPC call 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:17.630 12:27:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.630 
12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 93671 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 93671 00:20:17.630 [2024-07-25 12:27:32.215938] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 93700 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:20:17.630 Got SIGTERM, quitting. 00:20:17.630 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:17.630 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:17.630 avahi-daemon 0.8 exiting. 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.630 rmmod nvme_tcp 00:20:17.630 rmmod nvme_fabrics 00:20:17.630 rmmod nvme_keyring 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 93615 ']' 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 93615 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # '[' -z 93615 ']' 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # kill -0 93615 00:20:17.630 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # uname 00:20:17.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:17.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93615 00:20:17.631 killing process with pid 93615 00:20:17.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:17.631 12:27:32 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:17.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93615' 00:20:17.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@969 -- # kill 93615 00:20:17.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@974 -- # wait 93615 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:18.197 ************************************ 00:20:18.197 END TEST nvmf_mdns_discovery 00:20:18.197 ************************************ 00:20:18.197 00:20:18.197 real 0m20.802s 00:20:18.197 user 0m40.471s 00:20:18.197 sys 0m2.061s 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.197 ************************************ 00:20:18.197 START TEST nvmf_host_multipath 00:20:18.197 ************************************ 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:18.197 * Looking for test storage... 
00:20:18.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
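With the malloc geometry fixed above (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), the target-side provisioning the script performs a few entries below reduces to a handful of RPCs. A condensed sketch using exactly the calls and arguments this run issues; the flag readings in the comments are my interpretation:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  # -a allows any host, -s sets the serial, -r enables ANA reporting, -m caps namespaces.
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on one target address give the host two paths to the same namespace.
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421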
00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:18.197 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:18.198 12:27:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:18.198 Cannot find device "nvmf_tgt_br" 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:20:18.198 12:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.198 Cannot find device "nvmf_tgt_br2" 00:20:18.198 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:20:18.198 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:18.198 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:18.198 Cannot find device "nvmf_tgt_br" 00:20:18.198 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:18.198 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:18.198 Cannot find device "nvmf_tgt_br2" 00:20:18.198 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:18.198 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:18.456 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:18.457 12:27:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:18.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:20:18.457 00:20:18.457 --- 10.0.0.2 ping statistics --- 00:20:18.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.457 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:18.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:18.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:18.457 00:20:18.457 --- 10.0.0.3 ping statistics --- 00:20:18.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.457 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:18.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:18.457 00:20:18.457 --- 10.0.0.1 ping statistics --- 00:20:18.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.457 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:18.457 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:18.714 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94267 00:20:18.714 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94267 00:20:18.714 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:18.714 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 94267 ']' 00:20:18.714 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.714 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:18.714 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.715 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:18.715 12:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:18.715 [2024-07-25 12:27:33.374348] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
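The fixture the three pings just verified is a three-veth topology: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, both target interfaces (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk, and the peer ends are bridged over nvmf_br. Condensed from the exact ip/iptables commands logged above (the loop stands in for the three explicit master commands, and the link-up steps are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target, path 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target, path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$br" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT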
00:20:18.715 [2024-07-25 12:27:33.374683] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.715 [2024-07-25 12:27:33.512627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:18.972 [2024-07-25 12:27:33.644671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.972 [2024-07-25 12:27:33.645032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.972 [2024-07-25 12:27:33.645199] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.972 [2024-07-25 12:27:33.645388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.972 [2024-07-25 12:27:33.645433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.972 [2024-07-25 12:27:33.645650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.972 [2024-07-25 12:27:33.645665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.538 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.538 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:20:19.538 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.538 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:19.538 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:19.797 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.797 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94267 00:20:19.797 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:19.797 [2024-07-25 12:27:34.652761] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.055 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:20.313 Malloc0 00:20:20.313 12:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:20.571 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.829 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.829 [2024-07-25 12:27:35.681156] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.086 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4421 00:20:21.348 [2024-07-25 12:27:35.965354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94365 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 94365 /var/tmp/bdevperf.sock 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 94365 ']' 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.348 12:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:22.288 12:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.288 12:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:20:22.288 12:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:22.547 12:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:22.805 Nvme0n1 00:20:22.805 12:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:23.370 Nvme0n1 00:20:23.370 12:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:23.370 12:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:24.304 12:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:24.304 12:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:24.561 12:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:24.820 
12:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:24.820 12:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94458 00:20:24.820 12:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:24.820 12:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94267 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:31.386 Attaching 4 probes... 00:20:31.386 @path[10.0.0.2, 4421]: 17708 00:20:31.386 @path[10.0.0.2, 4421]: 18115 00:20:31.386 @path[10.0.0.2, 4421]: 17776 00:20:31.386 @path[10.0.0.2, 4421]: 17340 00:20:31.386 @path[10.0.0.2, 4421]: 17710 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:31.386 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94458 00:20:31.387 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:31.387 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:31.387 12:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:31.387 12:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:31.644 12:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:31.644 12:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94583 00:20:31.644 12:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94267 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:31.644 12:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select 
(.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:38.203 Attaching 4 probes... 00:20:38.203 @path[10.0.0.2, 4420]: 17814 00:20:38.203 @path[10.0.0.2, 4420]: 17484 00:20:38.203 @path[10.0.0.2, 4420]: 16956 00:20:38.203 @path[10.0.0.2, 4420]: 16394 00:20:38.203 @path[10.0.0.2, 4420]: 16859 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:38.203 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:38.204 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94583 00:20:38.204 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:38.204 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:38.204 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:38.204 12:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:38.462 12:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:38.462 12:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94267 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:38.462 12:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94719 00:20:38.462 12:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:45.018 Attaching 4 probes... 
00:20:45.018 @path[10.0.0.2, 4421]: 13144 00:20:45.018 @path[10.0.0.2, 4421]: 16633 00:20:45.018 @path[10.0.0.2, 4421]: 16789 00:20:45.018 @path[10.0.0.2, 4421]: 16499 00:20:45.018 @path[10.0.0.2, 4421]: 16588 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94719 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:45.018 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:45.276 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:45.276 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94854 00:20:45.276 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94267 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:45.276 12:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:51.837 12:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:51.837 12:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:51.838 Attaching 4 probes... 
00:20:51.838 00:20:51.838 00:20:51.838 00:20:51.838 00:20:51.838 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94854 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:51.838 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:52.096 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:52.096 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94267 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:52.096 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94980 00:20:52.096 12:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:58.652 12:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:58.652 12:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.652 Attaching 4 probes... 
00:20:58.652 @path[10.0.0.2, 4421]: 16232 00:20:58.652 @path[10.0.0.2, 4421]: 16654 00:20:58.652 @path[10.0.0.2, 4421]: 16957 00:20:58.652 @path[10.0.0.2, 4421]: 16529 00:20:58.652 @path[10.0.0.2, 4421]: 17409 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94980 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.652 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:06.145 [2024-07-25 12:28:13.280365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113d330 is same with the state(5) to be set 00:20:58.652 [... the same tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats verbatim with successive timestamps from 2024-07-25 12:28:13.280421 through 12:28:13.281050 ...] 00:20:58.653 12:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:59.586 12:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # 
confirm_io_on_port non_optimized 4420 00:20:59.586 12:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95116 00:20:59.586 12:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94267 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:59.586 12:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:06.145 Attaching 4 probes... 00:21:06.145 @path[10.0.0.2, 4420]: 17214 00:21:06.145 @path[10.0.0.2, 4420]: 17625 00:21:06.145 @path[10.0.0.2, 4420]: 17537 00:21:06.145 @path[10.0.0.2, 4420]: 17473 00:21:06.145 @path[10.0.0.2, 4420]: 16933 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95116 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:06.145 [2024-07-25 12:28:20.819820] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:06.145 12:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:06.403 12:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:12.961 12:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:12.961 12:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95303 00:21:12.961 12:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:12.961 12:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94267 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:19.532 Attaching 4 probes... 00:21:19.532 @path[10.0.0.2, 4421]: 14607 00:21:19.532 @path[10.0.0.2, 4421]: 16626 00:21:19.532 @path[10.0.0.2, 4421]: 17381 00:21:19.532 @path[10.0.0.2, 4421]: 16069 00:21:19.532 @path[10.0.0.2, 4421]: 16492 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95303 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94365 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 94365 ']' 00:21:19.532 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 94365 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94365 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94365' 00:21:19.533 killing process with pid 94365 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 94365 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 94365 00:21:19.533 Connection closed with partial response: 00:21:19.533 00:21:19.533 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94365 00:21:19.533 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:19.533 [2024-07-25 12:27:36.038336] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
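Before the full bdevperf try.txt log below, a note on the helper that produced the @path output above. The two confirm_io_on_port checks are driven by test/nvmf/host/multipath.sh (xtrace tags @64-@73). A minimal bash sketch of what the trace shows the helper doing; the $rootdir/$testdir variables, the $bdevperf_pid name, the trace.txt redirection, and the exact pipeline order are illustrative assumptions reconstructed from the xtrace lines, not verbatim from the script:

    confirm_io_on_port() {
        local expected_state=$1 expected_port=$2
        # @64/@65: attach the bpftrace probes (nvmf_path.bt) to the running bdevperf
        # pid; the probes count completed I/O per path as "@path[<addr>, <port>]: <n>"
        "$rootdir/scripts/bpftrace.sh" "$bdevperf_pid" "$rootdir/scripts/bpf/nvmf_path.bt" &> "$testdir/trace.txt" &
        dtrace_pid=$!
        # @66: let I/O run against whichever path is currently preferred
        sleep 6
        # @67: ask the target which listener currently carries the expected ANA state
        active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
            jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
        # @68/@69: extract the port the probes actually saw I/O on
        cat "$testdir/trace.txt"
        port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$testdir/trace.txt" | cut -d ']' -f1 | sed -n 1p)
        # @70/@71: both the observed port and the ANA-reported port must match
        [[ $port == "$expected_port" ]]
        [[ $active_port == "$expected_port" ]]
        # @72/@73: detach the probes and clear the trace for the next check
        kill $dtrace_pid
        rm -f "$testdir/trace.txt"
    }

Between the two confirms, the rpc.py nvmf_subsystem_add_listener and nvmf_subsystem_listener_set_ana_state calls (@107/@108) move the preferred path from 4420 (non_optimized) to 4421 (optimized). The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions summarized in the bdevperf log below are consistent with that: they are I/O caught on a path whose ANA state the test had just changed, which the host multipath logic then retries on the other port.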
00:21:19.533 [2024-07-25 12:27:36.038455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94365 ]
00:21:19.533 [2024-07-25 12:27:36.174067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:19.533 [2024-07-25 12:27:36.294181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:19.533 Running I/O for 90 seconds...
00:21:19.533 [2024-07-25 12:27:46.269851 - 12:27:46.276186] nvme_qpair.c: 243/474: [repeated command/completion record pairs elided: WRITE (lba 64808-65232) and READ (lba 64744-64800) commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0008-0045, p:0 m:0 dnr:0]
00:21:19.535 [2024-07-25 12:27:52.849816 - 12:27:52.855359] nvme_qpair.c: 243/474: [repeated command/completion record pairs elided: WRITE (lba 121536-122288) and READ (lba 121280-121416) commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 000c-007c, p:0 m:0 dnr:0]
00:21:19.538 [2024-07-25 12:27:52.855386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:75 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 
12:27:52.855817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:52.855940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.538 [2024-07-25 12:27:52.855955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:59.943594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.538 [2024-07-25 12:27:59.943729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:59.943769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.538 [2024-07-25 12:27:59.943788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:59.943811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.538 [2024-07-25 12:27:59.943826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:59.943848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.538 [2024-07-25 12:27:59.943863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:59.943884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.538 [2024-07-25 12:27:59.943899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:59.943942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.538 [2024-07-25 12:27:59.943959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 
cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:59.943979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.538 [2024-07-25 12:27:59.943994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:59.944015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.538 [2024-07-25 12:27:59.944029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:19.538 [2024-07-25 12:27:59.944050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.944492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.539 [2024-07-25 12:27:59.944529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.539 [2024-07-25 12:27:59.944564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.539 [2024-07-25 12:27:59.944600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.539 [2024-07-25 12:27:59.944635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.539 [2024-07-25 12:27:59.944670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.944702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.539 [2024-07-25 12:27:59.944720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.539 [2024-07-25 12:27:59.945667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.945713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.945750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.945796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.945835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.945870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.945906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.945942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.945978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.945999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 
[2024-07-25 12:27:59.946014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2448 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:19.539 [2024-07-25 12:27:59.946418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.539 [2024-07-25 12:27:59.946433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:108 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.946892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.946906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:19.540 
[2024-07-25 12:27:59.947871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.947976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.947996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.948018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.948033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.948054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.948069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.948089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.948104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.948125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.948140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.948160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.948175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.948195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.540 [2024-07-25 12:27:59.948210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 
sqhd:001b p:0 m:0 dnr:0 00:21:19.540 [2024-07-25 12:27:59.948231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.948965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.948980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 
[2024-07-25 12:27:59.949350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.949670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.541 [2024-07-25 12:27:59.949684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.541 [2024-07-25 12:27:59.950446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3080 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:21:19.542 [2024-07-25 12:27:59.950475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:21:19.542 [2024-07-25 12:27:59.950502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.542 [2024-07-25 12:27:59.950519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:21:19.542 [2024-07-25 12:27:59.950540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.542 [2024-07-25 12:27:59.950555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:21:19.542 [2024-07-25 12:27:59.950577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.542 [2024-07-25 12:27:59.950592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[... identical command/completion pairs continue through 12:27:59.970721: every outstanding READ (lba 2088-2136, SGL TRANSPORT DATA BLOCK) and WRITE (lba 2144-3104, SGL DATA BLOCK) on qid:1 is printed and completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02); sqhd advances 0047 through 007f, wraps to 0000, and the same lba sequence then repeats once more on retry with new cids ...]
00:21:19.547 [2024-07-25 12:27:59.970701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:19.547 [2024-07-25 12:27:59.970721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:21:19.547
[2024-07-25 12:27:59.970761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.547 [2024-07-25 12:27:59.970782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:19.547 [2024-07-25 12:27:59.970813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.547 [2024-07-25 12:27:59.970834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.547 [2024-07-25 12:27:59.971885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.971924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.971962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.971986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.972947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.972981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 
[2024-07-25 12:27:59.973922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:19.548 [2024-07-25 12:27:59.973952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.548 [2024-07-25 12:27:59.973972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2928 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.974956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.974985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.975005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.975034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.975062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.975093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.975114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.975144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.975163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.975193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.975213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.975243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.975263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.976965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.976985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:19.549 [2024-07-25 12:27:59.977014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.549 [2024-07-25 12:27:59.977047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:19.549 
[2024-07-25 12:27:59.977067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.550 [2024-07-25 12:27:59.977558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.550 [2024-07-25 12:27:59.977599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.550 [2024-07-25 12:27:59.977633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.550 [2024-07-25 12:27:59.977668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.550 [2024-07-25 12:27:59.977702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.550 [2024-07-25 12:27:59.977737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.550 [2024-07-25 12:27:59.977780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.977965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.977986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.550 [2024-07-25 12:27:59.978492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:19.550 [2024-07-25 12:27:59.978512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 
[2024-07-25 12:27:59.978527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.978876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2536 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.978902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:19.551 [2024-07-25 12:27:59.979980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:120 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.551 [2024-07-25 12:27:59.979994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:21:19.551-00:21:19.557 [2024-07-25 12:27:59.980015 through 12:27:59.997758] nvme_qpair.c: [long run of near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: WRITE commands (plus a short burst of READs with SGL TRANSPORT DATA BLOCK) on sqid:1, nsid:1, len:8, sweeping lba 2088-3104 in steps of 8, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0; sqhd advances from 0x0009 to 0x007f, wraps to 0x0000, and a second sweep over the same LBA range follows; the run continues below]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.997784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.997799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.997825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.997839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.997865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-25 12:27:59.997880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.997906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-25 12:27:59.997920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.997946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-25 12:27:59.997961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.997987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-25 12:27:59.998001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-25 12:27:59.998041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-25 12:27:59.998081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-25 12:27:59.998122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:19.557 [2024-07-25 12:27:59.998163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2384 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.998962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.998988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.999004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:19.557 [2024-07-25 12:27:59.999029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.557 [2024-07-25 12:27:59.999044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:27:59.999070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:27:59.999084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:27:59.999110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:27:59.999125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:27:59.999150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:27:59.999165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:27:59.999191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:27:59.999206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:27:59.999238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:27:59.999253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:27:59.999280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:27:59.999306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:27:59.999475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:27:59.999498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.282413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.282462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.282520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.282542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.282565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.282581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.282603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.282618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.282639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.282654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.282675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.282690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.282711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.282726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.282747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.282762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.282999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.558 [2024-07-25 12:28:13.283021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 
12:28:13.283118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:12 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.558 [2024-07-25 12:28:13.283734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.558 [2024-07-25 12:28:13.283749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.283763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.283779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.283792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.283807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.283837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.283854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.283867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.283882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.283895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.283910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.283924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.283939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.283952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.283968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.283981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.283996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32840 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 12:28:13.284274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 12:28:13.284315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 
12:28:13.284345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 12:28:13.284374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 12:28:13.284402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 12:28:13.284431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 12:28:13.284460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 12:28:13.284488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 12:28:13.284516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.559 [2024-07-25 12:28:13.284544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.559 [2024-07-25 12:28:13.284923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.559 [2024-07-25 12:28:13.284936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.284952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.284971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.284987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.560 [2024-07-25 12:28:13.285549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.560 [2024-07-25 12:28:13.285563] 
00:21:19.560 [2024-07-25 12:28:13.285577-13.286420] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE* pairs for every WRITE still queued on the deleted submission queue (sqid:1 nsid:1 lba:33168-33392 len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (29 near-identical notice pairs condensed) 
00:21:19.560-00:21:19.561 [2024-07-25 12:28:13.286435-13.286599] the same notice pairs for the queued READs (sqid:1 nsid:1 lba:32464-32504 len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each ABORTED - SQ DELETION (00/08) (6 pairs condensed) 
00:21:19.561 [2024-07-25 12:28:13.286632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:21:19.561 [2024-07-25 12:28:13.286647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32512 len:8 PRP1 0x0 PRP2 0x0 
00:21:19.561 [2024-07-25 12:28:13.286661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:19.561 [2024-07-25 12:28:13.286728] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa98250 was disconnected and freed. reset controller. 
00:21:19.561 [2024-07-25 12:28:13.288478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.561 [2024-07-25 12:28:13.288563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.561 [2024-07-25 12:28:13.288587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.561 [2024-07-25 12:28:13.288620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa149c0 (9): Bad file descriptor 00:21:19.561 [2024-07-25 12:28:13.288960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.561 [2024-07-25 12:28:13.288994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa149c0 with addr=10.0.0.2, port=4421 00:21:19.561 [2024-07-25 12:28:13.289011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa149c0 is same with the state(5) to be set 00:21:19.561 [2024-07-25 12:28:13.289255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa149c0 (9): Bad file descriptor 00:21:19.561 [2024-07-25 12:28:13.289329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:19.561 [2024-07-25 12:28:13.289351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:19.561 [2024-07-25 12:28:13.289365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:19.561 [2024-07-25 12:28:13.289547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:19.561 [2024-07-25 12:28:13.289572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.561 [2024-07-25 12:28:23.350088] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
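The reconnect to 10.0.0.2 port 4421 above is the multipath path switch under test: the active listener goes away, queued I/O drains as ABORTED - SQ DELETION, and bdev_nvme keeps retrying until the alternate path answers (successful at 12:28:23 after the initial connect() failures, errno 111). The multipath.sh steps themselves are not traced in this excerpt; a minimal hypothetical sketch of such a listener swap, reusing only the rpc.py calls, NQN, and address that appear in this log, would be:

    # hypothetical sketch of the path flip exercised above, not the script's verbatim code
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # drop the active path: in-flight commands complete as ABORTED - SQ DELETION (00/08)
    $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    # publish the alternate path: the host's reconnect loop lands on it (port=4421 above)
    $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421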
00:21:19.561 Received shutdown signal, test time was about 55.332826 seconds 
00:21:19.561 
00:21:19.561 Latency(us) 
00:21:19.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:19.561 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:21:19.561 Verification LBA range: start 0x0 length 0x4000 
00:21:19.561 Nvme0n1 : 55.33 7274.85 28.42 0.00 0.00 17564.62 476.63 7076934.75 
00:21:19.561 =================================================================================================================== 
00:21:19.561 Total : 7274.85 28.42 0.00 0.00 17564.62 476.63 7076934.75 
00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.561 rmmod nvme_tcp 00:21:19.561 rmmod nvme_fabrics 00:21:19.561 rmmod nvme_keyring 00:21:19.561 12:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94267 ']' 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94267 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 94267 ']' 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 94267 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94267 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:19.561 killing process with pid 94267 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94267' 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 94267 00:21:19.561 12:28:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 94267 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.561 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.562 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.562 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:19.562 00:21:19.562 real 1m1.482s 00:21:19.562 user 2m54.299s 00:21:19.562 sys 0m13.457s 00:21:19.562 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:19.562 12:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:19.562 ************************************ 00:21:19.562 END TEST nvmf_host_multipath 00:21:19.562 ************************************ 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.821 ************************************ 00:21:19.821 START TEST nvmf_timeout 00:21:19.821 ************************************ 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:19.821 * Looking for test storage... 
00:21:19.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 
00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.821 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:19.821-00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2-@4 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:...:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin (the same golangci/protoc/go toolchain prefix re-prepended on each re-source; duplicated PATH values condensed) 
00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 
00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo (the exported PATH value as above, condensed) 
00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:19.822 Cannot find device "nvmf_tgt_br" 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:19.822 Cannot find device "nvmf_tgt_br2" 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
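The trace below is nvmf_veth_init laying out the virtual test network (NET_TYPE=virt): the target runs inside the nvmf_tgt_ns_spdk namespace, three veth pairs are bridged together, and the 10.0.0.0/24 addresses pinged afterwards sit on those interfaces. Condensed into a sketch, using the same commands as the trace that follows, with comments added:

    ip netns add nvmf_tgt_ns_spdk                               # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, gets 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end, 10.0.0.2/24 inside the netns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target end, 10.0.0.3/24 inside the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                             # all *_br peers get enslaved to nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port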
00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:19.822 Cannot find device "nvmf_tgt_br" 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:19.822 Cannot find device "nvmf_tgt_br2" 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:19.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:19.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:19.822 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:20.086 12:28:34 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:20.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:21:20.086 00:21:20.086 --- 10.0.0.2 ping statistics --- 00:21:20.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.086 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:20.086 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:20.086 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:21:20.086 00:21:20.086 --- 10.0.0.3 ping statistics --- 00:21:20.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.086 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:20.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:20.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:21:20.086 00:21:20.086 --- 10.0.0.1 ping statistics --- 00:21:20.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.086 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95626 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95626 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95626 ']' 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.086 12:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:20.086 [2024-07-25 12:28:34.898531] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:21:20.086 [2024-07-25 12:28:34.898653] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.344 [2024-07-25 12:28:35.038546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:20.344 [2024-07-25 12:28:35.165251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:20.344 [2024-07-25 12:28:35.165329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.344 [2024-07-25 12:28:35.165358] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.344 [2024-07-25 12:28:35.165367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.344 [2024-07-25 12:28:35.165374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.344 [2024-07-25 12:28:35.166196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.344 [2024-07-25 12:28:35.166228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.279 12:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.279 12:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:21:21.279 12:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.279 12:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:21.279 12:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:21.279 12:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.279 12:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:21.279 12:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:21.537 [2024-07-25 12:28:36.246500] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.537 12:28:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:21.796 Malloc0 00:21:21.796 12:28:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.055 12:28:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.313 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.571 [2024-07-25 12:28:37.345889] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.571 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=95724 00:21:22.571 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:22.571 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 95724 /var/tmp/bdevperf.sock 00:21:22.571 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95724 ']' 00:21:22.571 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.571 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.571 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.571 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.571 12:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:22.571 [2024-07-25 12:28:37.432182] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:21:22.571 [2024-07-25 12:28:37.432306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95724 ] 00:21:22.829 [2024-07-25 12:28:37.573844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.087 [2024-07-25 12:28:37.731513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.652 12:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.652 12:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:21:23.652 12:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:23.978 12:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:24.237 NVMe0n1 00:21:24.237 12:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=95772 00:21:24.237 12:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.237 12:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:24.495 Running I/O for 10 seconds... 
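At this point bdevperf has been started with -z on its own RPC socket (the run is triggered later via perform_tests), and NVMe0 was attached with explicit recovery knobs; those knobs are what the listener removal in the next step exercises. A condensed replay of the setup just traced, with values copied from this log and the two timeout-related options annotated (the comments are a reading of the flags, not part of the trace):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # 128-deep, 4096-byte verify workload for 10 s; -z makes bdevperf wait for an RPC start signal
    $bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 -f &
    $rpc_py -s $sock bdev_nvme_set_options -r -1
    # --ctrlr-loss-timeout-sec 5: give up and fail the controller after ~5 s of unsuccessful reconnects
    # --reconnect-delay-sec 2: wait 2 s between reconnect attempts
    $rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests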
00:21:25.431 12:28:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:21:25.431 [2024-07-25 12:28:40.264090-40.266202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7caa0 is same with the state(5) to be set (identical error repeated for the same tqpair; duplicates condensed) 
00:21:25.431-00:21:25.433 [2024-07-25 12:28:40.266821-40.268577] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE* pairs for the I/O still queued when the listener was removed, condensed: WRITE sqid:1 nsid:1 lba:78104-78328 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), READ sqid:1 nsid:1 lba:77664-77840 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), then WRITE sqid:1 nsid:1 lba:78336-78472 len:8, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:25.433 [2024-07-25 12:28:40.268587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.433 [2024-07-25 12:28:40.268880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.433 [2024-07-25 12:28:40.268892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.268902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.268913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.268923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.268935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.268945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.268957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.268967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.268979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.268989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.434 [2024-07-25 12:28:40.269248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 
12:28:40.269385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.434 [2024-07-25 12:28:40.269824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.434 [2024-07-25 12:28:40.269833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.435 [2024-07-25 12:28:40.269845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.435 [2024-07-25 12:28:40.269854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.435 [2024-07-25 12:28:40.269865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.435 [2024-07-25 12:28:40.269874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.435 [2024-07-25 12:28:40.269886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.435 [2024-07-25 12:28:40.269896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.435 [2024-07-25 12:28:40.269907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.435 [2024-07-25 12:28:40.269917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.435 [2024-07-25 12:28:40.269929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.435 [2024-07-25 12:28:40.269939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.435 [2024-07-25 12:28:40.269951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.435 [2024-07-25 12:28:40.269961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.435 [2024-07-25 12:28:40.269973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.435 [2024-07-25 12:28:40.269999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.435 [2024-07-25 12:28:40.270012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x653a70 is same with the state(5) to be set 00:21:25.435 [2024-07-25 12:28:40.270037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.435 [2024-07-25 12:28:40.270046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.435 [2024-07-25 12:28:40.270054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78096 len:8 PRP1 0x0 PRP2 0x0 00:21:25.435 [2024-07-25 12:28:40.270080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.435 [2024-07-25 12:28:40.270146] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x653a70 was disconnected and freed. reset controller. 
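Every completion in the condensed abort run above carries the same status pair "(00/08)". SPDK prints it as (SCT/SC): Status Code Type 0x00 (Generic Command Status) and Status Code 0x08, which the NVMe base specification names "Command Aborted due to SQ Deletion" - the expected outcome when the submission queue is torn down while verify I/O is still in flight. A minimal shell sketch of the decode (the case table is an illustration, not an SPDK helper):

sct=00; sc=08    # copied from the "(00/08)" field printed in the completions above
case "$sct/$sc" in
  00/00) echo "Generic status: Successful Completion" ;;
  00/08) echo "Generic status: Command Aborted due to SQ Deletion" ;;
  *)     echo "other status: sct=$sct sc=$sc" ;;
esac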
00:21:25.435 [2024-07-25 12:28:40.270438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.435 [2024-07-25 12:28:40.270552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e6240 (9): Bad file descriptor
00:21:25.435 [2024-07-25 12:28:40.270709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.435 [2024-07-25 12:28:40.270739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e6240 with addr=10.0.0.2, port=4420
00:21:25.435 [2024-07-25 12:28:40.270751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e6240 is same with the state(5) to be set
00:21:25.435 [2024-07-25 12:28:40.270771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e6240 (9): Bad file descriptor
00:21:25.435 [2024-07-25 12:28:40.270807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.435 [2024-07-25 12:28:40.270819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.435 [2024-07-25 12:28:40.270831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.435 [2024-07-25 12:28:40.270851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.435 [2024-07-25 12:28:40.270864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.693 12:28:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:21:27.594 [2024-07-25 12:28:42.271009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:27.594 [2024-07-25 12:28:42.271081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e6240 with addr=10.0.0.2, port=4420
00:21:27.594 [2024-07-25 12:28:42.271100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e6240 is same with the state(5) to be set
00:21:27.594 [2024-07-25 12:28:42.271129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e6240 (9): Bad file descriptor
00:21:27.594 [2024-07-25 12:28:42.271150] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:27.594 [2024-07-25 12:28:42.271161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:27.594 [2024-07-25 12:28:42.271172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:27.594 [2024-07-25 12:28:42.271207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:27.594 [2024-07-25 12:28:42.271225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:27.594 12:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:21:27.594 12:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:27.594 12:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:21:27.851 12:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:21:27.851 12:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:21:27.851 12:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:21:27.851 12:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:21:28.108 12:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:21:28.108 12:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:21:29.477 [2024-07-25 12:28:44.271435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:29.477 [2024-07-25 12:28:44.271571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e6240 with addr=10.0.0.2, port=4420
00:21:29.477 [2024-07-25 12:28:44.271590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e6240 is same with the state(5) to be set
00:21:29.477 [2024-07-25 12:28:44.271619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e6240 (9): Bad file descriptor
00:21:29.477 [2024-07-25 12:28:44.271639] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:29.477 [2024-07-25 12:28:44.271650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:29.477 [2024-07-25 12:28:44.271664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:29.477 [2024-07-25 12:28:44.271701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:29.477 [2024-07-25 12:28:44.271714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:32.004 [2024-07-25 12:28:46.271797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:32.004 [2024-07-25 12:28:46.271895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:32.004 [2024-07-25 12:28:46.271925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:32.004 [2024-07-25 12:28:46.271942] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:21:32.004 [2024-07-25 12:28:46.271971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
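The get_controller/get_bdev checks traced above poll bdevperf's RPC socket to confirm that, while reconnect attempts are still inside the configured ctrlr-loss window, the controller and its namespace bdev remain registered. A minimal sketch of the same check (paths, socket name, and jq filter all copied from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')   # expect "NVMe0"
bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')               # expect "NVMe0n1"
[[ "$ctrlr" == "NVMe0" && "$bdev" == "NVMe0n1" ]] && echo "controller and bdev still present"

Once the 5-second ctrlr-loss timeout expires (the 12:28:46 block above), both queries come back empty, which is what the later [[ '' == '' ]] assertions in the trace verify.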
00:21:32.570
00:21:32.570                                                        Latency(us)
00:21:32.570 Device Information                      : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:32.570 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:32.570 	 Verification LBA range: start 0x0 length 0x4000
00:21:32.570 	 NVMe0n1             :       8.14    1193.20       4.66      15.73       0.00  105712.80    2368.23 7015926.69
00:21:32.570 ===================================================================================================================
00:21:32.570 Total                                   :               1193.20       4.66      15.73       0.00  105712.80    2368.23 7015926.69
00:21:32.570 0
00:21:33.136 12:28:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:21:33.136 12:28:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:33.136 12:28:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:21:33.394 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:21:33.394 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:21:33.394 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:21:33.394 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 95772
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 95724
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95724 ']'
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95724
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95724
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:21:33.652 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:21:33.653 killing process with pid 95724
00:21:33.653 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95724'
00:21:33.653 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95724
00:21:33.653 Received shutdown signal, test time was about 9.275471 seconds
00:21:33.653
00:21:33.653                                                        Latency(us)
00:21:33.653 Device Information                      : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:33.653 ===================================================================================================================
00:21:33.653 Total                                   :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:21:33.653 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95724
00:21:33.910 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:34.169 [2024-07-25 12:28:48.867404] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
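A quick consistency check on the first latency table above (assuming 4096-byte IOs, as the job line states): throughput in MiB/s should equal IOPS times the IO size divided by 2^20.

awk 'BEGIN { iops=1193.20; io=4096; printf "MiB/s = %.2f\n", iops*io/1048576 }'   # prints 4.66, matching the table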
00:21:34.169 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=95930
00:21:34.169 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:21:34.169 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 95930 /var/tmp/bdevperf.sock
00:21:34.169 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95930 ']'
00:21:34.169 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:34.169 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:34.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:34.169 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:34.169 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:34.169 12:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:21:34.170 [2024-07-25 12:28:48.936570] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization...
00:21:34.170 [2024-07-25 12:28:48.936651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95930 ]
00:21:34.428 [2024-07-25 12:28:49.069679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:34.428 [2024-07-25 12:28:49.182516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:35.363 12:28:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:35.363 12:28:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:21:35.363 12:28:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:21:35.621 12:28:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:21:35.879 NVMe0n1
00:21:35.879 12:28:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:35.879 12:28:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=95972
00:21:35.879 12:28:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:21:35.879 Running I/O for 10 seconds...
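The attach traced above is what arms the whole timeout scenario: the three long options set the reconnect policy that the earlier error cycles exercised. A sketch of the same call with the knobs annotated (all values copied from the trace; the annotations reflect the documented behaviour of SPDK's bdev_nvme reconnect options):

# --reconnect-delay-sec 1       wait 1s between reconnect attempts to the target
# --fast-io-fail-timeout-sec 2  start failing queued I/O after 2s without a connection
# --ctrlr-loss-timeout-sec 5    after 5s of failed reconnects, give up and delete the controller
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1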
00:21:36.815 12:28:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:37.076 [2024-07-25 12:28:51.865240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7a8b0 is same with the state(5) to be set
00:21:37.076 [2024-07-25 12:28:51.865349 .. 12:28:51.865993] [condensed: the previous tcp.c:1653 error repeats continuously for tqpair=0xd7a8b0 throughout this interval]
00:21:37.077 [2024-07-25 12:28:51.866278 .. 12:28:51.866910] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: repeated command/completion pairs] READ sqid:1 lba:79264-79448 (len:8, step 8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed as: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (log truncated at 12:28:51.866910)
READ sqid:1 cid:66 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.866920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.866937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.866954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.866973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.866990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79536 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:37.078 [2024-07-25 12:28:51.867453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.078 [2024-07-25 12:28:51.867669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.078 [2024-07-25 12:28:51.867693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.078 [2024-07-25 
12:28:51.867729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.078 [2024-07-25 12:28:51.867766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.078 [2024-07-25 12:28:51.867778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.078 [2024-07-25 12:28:51.867788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.867799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.867808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.867820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.867833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.867853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.867868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.867879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.867889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.867900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.867909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.867921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.867936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.867953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.867970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.867986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.867995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.868021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.868042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.868073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.868103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.868123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.079 [2024-07-25 12:28:51.868164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 
[2024-07-25 12:28:51.868829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.079 [2024-07-25 12:28:51.868838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.079 [2024-07-25 12:28:51.868849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.868858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.868870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.868893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.868914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.868930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.868950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.868960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.868972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.868981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.868993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.869002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.869031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.080 [2024-07-25 12:28:51.869584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.869615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.869635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.869656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.869676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79976 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.869697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.080 [2024-07-25 12:28:51.869717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ebb0 is same with the state(5) to be set 00:21:37.080 [2024-07-25 12:28:51.869740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:37.080 [2024-07-25 12:28:51.869748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:37.080 [2024-07-25 12:28:51.869757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 0x0 00:21:37.080 [2024-07-25 12:28:51.869772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.869856] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x190ebb0 was disconnected and freed. reset controller. 00:21:37.080 [2024-07-25 12:28:51.869976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.080 [2024-07-25 12:28:51.870007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.870024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.080 [2024-07-25 12:28:51.870040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.870056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.080 [2024-07-25 12:28:51.870073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.870088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.080 [2024-07-25 12:28:51.870098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.080 [2024-07-25 12:28:51.870123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1240 is same with the state(5) to be set 00:21:37.080 [2024-07-25 12:28:51.870398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.080 [2024-07-25 12:28:51.870446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1240 (9): Bad file descriptor 00:21:37.080 [2024-07-25 12:28:51.870553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.080 [2024-07-25 12:28:51.870593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
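For reference: errno = 111 on Linux is ECONNREFUSED. While the target's listener is down, nothing is listening on 10.0.0.2:4420, so each reconnect attempt above is refused immediately rather than timing out. A quick, illustrative way to confirm the mapping (not part of the test scripts):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused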
00:21:37.081 [2024-07-25 12:28:51.870613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1240 is same with the state(5) to be set
00:21:37.081 [2024-07-25 12:28:51.870634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1240 (9): Bad file descriptor
00:21:37.081 [2024-07-25 12:28:51.870651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:37.081 [2024-07-25 12:28:51.870662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:37.081 [2024-07-25 12:28:51.870676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:37.081 [2024-07-25 12:28:51.870696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:37.081 [2024-07-25 12:28:51.880763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:37.081 12:28:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:21:38.016 [2024-07-25 12:28:52.880934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:38.016 [2024-07-25 12:28:52.880993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1240 with addr=10.0.0.2, port=4420
00:21:38.016 [2024-07-25 12:28:52.881011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1240 is same with the state(5) to be set
00:21:38.016 [2024-07-25 12:28:52.881048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1240 (9): Bad file descriptor
00:21:38.016 [2024-07-25 12:28:52.881079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:38.016 [2024-07-25 12:28:52.881102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:38.016 [2024-07-25 12:28:52.881114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:38.016 [2024-07-25 12:28:52.881141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:38.016 [2024-07-25 12:28:52.881154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:38.274 12:28:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:38.532 [2024-07-25 12:28:53.163482] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:38.532 12:28:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 95972
00:21:39.097 [2024-07-25 12:28:53.897171] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
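The sequence above is the heart of this host timeout test: with the listener gone, in-flight commands are aborted with SQ DELETION, the bdev layer keeps retrying the controller reset, and the reset only succeeds once the listener is restored. A minimal sketch of the same listener bounce, assuming an SPDK checkout with rpc.py on the target side and the same NQN, address and port as in this run:

    # Drop the TCP listener: connected hosts see aborts (SQ DELETION) and
    # enter the reset/reconnect loop (connect() fails with ECONNREFUSED).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    # Restore the listener: the next host reset attempt reconnects and succeeds.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420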
00:21:47.234
00:21:47.234                                                                Latency(us)
00:21:47.234 Device Information          : runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average       min          max
00:21:47.234 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:47.234 Verification LBA range: start 0x0 length 0x4000
00:21:47.234 NVMe0n1                     :      10.01  6221.37    24.30     0.00   0.00   20529.16   2040.55   3019898.88
00:21:47.234 ===================================================================================================================
00:21:47.234 Total                       :             6221.37    24.30     0.00   0.00   20529.16   2040.55   3019898.88
00:21:47.234 0
00:21:47.234 12:29:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96092
00:21:47.234 12:29:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:47.234 12:29:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:21:47.235 Running I/O for 10 seconds...
00:21:47.235 12:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
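As a quick plausibility check on the summary table above, the MiB/s column should equal IOPS times the 4096-byte I/O size; a throwaway snippet with the values copied from the NVMe0n1 row:

    python3 - <<'EOF'
    iops = 6221.37               # IOPS column, NVMe0n1 row
    io_size = 4096               # bytes, from "IO size: 4096"
    print(f"{iops * io_size / (1024 * 1024):.2f} MiB/s")  # -> 24.30, matching the table
    EOF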
00:21:47.235 [2024-07-25 12:29:02.039470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd31c0 is same with the state(5) to be set
00:21:47.235 [... same message repeated for tqpair=0xbd31c0 through 12:29:02.039948 ...]
00:21:47.235 [2024-07-25 12:29:02.040906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.235 [2024-07-25 12:29:02.040950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:47.236 [... analogous print_command/print_completion pairs for READ lba:82072-82272 and WRITE lba:82768-82808, every completion ABORTED - SQ DELETION (00/08) ...]
00:21:47.236 [2024-07-25 12:29:02.041814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.236 [2024-07-25 12:29:02.041823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.041834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.041843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.041855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.041866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.041877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.041886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.041900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.041916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.041936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.041952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.041971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.041984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.041996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.042017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.042037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.042058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042068] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.042079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.042106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.042157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.042194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.042219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.236 [2024-07-25 12:29:02.042240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-07-25 12:29:02.042249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042371] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:47.237 [2024-07-25 12:29:02.042872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.042982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.042995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.043005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.043016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.043026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.043037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.043046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.043057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.043066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.043077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.043086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.043103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.043119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.043139] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.043155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.043173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.043183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.043194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-07-25 12:29:02.043203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.237 [2024-07-25 12:29:02.043214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.238 [2024-07-25 12:29:02.043224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.238 [2024-07-25 12:29:02.043244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.238 [2024-07-25 12:29:02.043272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.238 [2024-07-25 12:29:02.043313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.238 [2024-07-25 12:29:02.043352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.238 [2024-07-25 12:29:02.043381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.238 [2024-07-25 12:29:02.043409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82976 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.043985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.043997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.044013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.044032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.044048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.044067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.044084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.044100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.044110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.044121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.044131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.044149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.044165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.044183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.044193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.044213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 12:29:02.044223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.238 [2024-07-25 12:29:02.044234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.238 [2024-07-25 
12:29:02.044244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.239 [2024-07-25 12:29:02.044256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.239 [2024-07-25 12:29:02.044270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.239 [2024-07-25 12:29:02.044332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:47.239 [2024-07-25 12:29:02.044361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83072 len:8 PRP1 0x0 PRP2 0x0 00:21:47.239 [2024-07-25 12:29:02.044377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.239 [2024-07-25 12:29:02.044392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:47.239 [2024-07-25 12:29:02.044400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:47.239 [2024-07-25 12:29:02.044409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83080 len:8 PRP1 0x0 PRP2 0x0 00:21:47.239 [2024-07-25 12:29:02.044418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.239 [2024-07-25 12:29:02.044482] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x191fdc0 was disconnected and freed. reset controller. 00:21:47.239 [2024-07-25 12:29:02.044582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.239 [2024-07-25 12:29:02.044597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.239 [2024-07-25 12:29:02.044608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.239 [2024-07-25 12:29:02.044619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.239 [2024-07-25 12:29:02.044629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.239 [2024-07-25 12:29:02.044638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.239 [2024-07-25 12:29:02.044649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.239 [2024-07-25 12:29:02.044658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.239 [2024-07-25 12:29:02.044667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1240 is same with the state(5) to be set 00:21:47.239 [2024-07-25 12:29:02.044952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.239 [2024-07-25 12:29:02.045000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1240 (9): Bad 
00:21:47.239 [2024-07-25 12:29:02.045000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1240 (9): Bad file descriptor
00:21:47.239 [2024-07-25 12:29:02.045156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:47.239 [2024-07-25 12:29:02.045183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1240 with addr=10.0.0.2, port=4420
00:21:47.239 [2024-07-25 12:29:02.045195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1240 is same with the state(5) to be set
00:21:47.239 [2024-07-25 12:29:02.045245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1240 (9): Bad file descriptor
00:21:47.239 [2024-07-25 12:29:02.045273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:47.239 [2024-07-25 12:29:02.045284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:47.239 [2024-07-25 12:29:02.045308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:47.239 [2024-07-25 12:29:02.045333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:47.239 [2024-07-25 12:29:02.045346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:47.239 12:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
[... output trimmed: the same connect() failed (errno = 111) / controller reinitialization failed / Resetting controller failed cycle for tqpair=0x18a1240 repeats at 12:29:03 (00:21:48.617) and again at 12:29:04 (00:21:49.184) ...]
00:21:50.560 [2024-07-25 12:29:05.049920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.560 [2024-07-25 12:29:05.049985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1240 with addr=10.0.0.2, port=4420
00:21:50.560 [2024-07-25 12:29:05.050001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1240 is same with the state(5) to be set
00:21:50.560 [2024-07-25 12:29:05.050255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1240 (9): Bad file descriptor
00:21:50.560 [2024-07-25 12:29:05.050521] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.560 [2024-07-25 12:29:05.050536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.560 [2024-07-25 12:29:05.050548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.560 [2024-07-25 12:29:05.054501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:50.560 [2024-07-25 12:29:05.054533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.560 12:29:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:50.560 [2024-07-25 12:29:05.338020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:50.560 12:29:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96092
00:21:51.496 [2024-07-25 12:29:06.085640] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:56.791 
00:21:56.791 Latency(us)
00:21:56.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.791 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:56.791 Verification LBA range: start 0x0 length 0x4000
00:21:56.791 NVMe0n1 : 10.01 5190.06 20.27 3575.24 0.00 14570.21 621.85 3019898.88
00:21:56.791 ===================================================================================================================
00:21:56.791 Total : 5190.06 20.27 3575.24 0.00 14570.21 0.00 3019898.88
00:21:56.791 0
00:21:56.791 12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 95930
12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95930 ']'
12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95930
12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95930
12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
killing process with pid 95930
12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95930'
12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95930
Received shutdown signal, test time was about 10.000000 seconds
00:21:56.791 
00:21:56.791 Latency(us)
00:21:56.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.791 ===================================================================================================================
00:21:56.791 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:56.791 12:29:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95930
00:21:56.791 12:29:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96217
12:29:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
12:29:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96217 /var/tmp/bdevperf.sock
12:29:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96217 ']'
12:29:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
12:29:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
12:29:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:56.791 12:29:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
12:29:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:21:56.791 [2024-07-25 12:29:11.233285] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization...
00:21:56.791 [2024-07-25 12:29:11.233405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96217 ]
00:21:56.791 [2024-07-25 12:29:11.370930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:56.791 [2024-07-25 12:29:11.487424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:57.767 12:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
12:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
12:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96247
12:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96217 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
12:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
12:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:21:58.026 NVMe0n1
00:21:58.026 12:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
12:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96301
12:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:21:58.285 Running I/O for 10 seconds...
00:21:59.221 12:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:59.482 [2024-07-25 12:29:14.107978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set
[... previous message repeated dozens of times for tqpair=0xbd6410, timestamps through 12:29:14.108591 ...]
00:21:59.483 [2024-07-25 12:29:14.108599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.483 [2024-07-25 12:29:14.108704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is 
same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.108994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6410 is same with the state(5) to be set 00:21:59.484 [2024-07-25 12:29:14.109834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 [2024-07-25 12:29:14.109878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.484 [2024-07-25 12:29:14.109900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 
[2024-07-25 12:29:14.109911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.484 [2024-07-25 12:29:14.109923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 [2024-07-25 12:29:14.109934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.484 [2024-07-25 12:29:14.109946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 [2024-07-25 12:29:14.109956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.484 [2024-07-25 12:29:14.109967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 [2024-07-25 12:29:14.109976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.484 [2024-07-25 12:29:14.109987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 [2024-07-25 12:29:14.109996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.484 [2024-07-25 12:29:14.110008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 [2024-07-25 12:29:14.110017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110140] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110708] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.485 [2024-07-25 12:29:14.110836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.485 [2024-07-25 12:29:14.110845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.110864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.110873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.110885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.110894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.110905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.110921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.110940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.110956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.110972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.110982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.110993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:59.486 [2024-07-25 12:29:14.111232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111518] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.486 [2024-07-25 12:29:14.111635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.486 [2024-07-25 12:29:14.111644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.111985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.111997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112039] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.487 [2024-07-25 12:29:14.112426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.487 [2024-07-25 12:29:14.112443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.488 [2024-07-25 12:29:14.112460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.488 [2024-07-25 12:29:14.112478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.488 [2024-07-25 12:29:14.112495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.488 [2024-07-25 12:29:14.112526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.488 [2024-07-25 12:29:14.112540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13040 len:8 PRP1 0x0 PRP2 0x0 00:21:59.488 [2024-07-25 12:29:14.112550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.488 [2024-07-25 12:29:14.112564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.488 [2024-07-25 12:29:14.112571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.488 [2024-07-25 12:29:14.112579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16848 len:8 PRP1 0x0 PRP2 0x0 00:21:59.488 [2024-07-25 12:29:14.112588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.488 [2024-07-25 12:29:14.112599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.488 [2024-07-25 12:29:14.112623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.488 [2024-07-25 12:29:14.112638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72792 len:8 PRP1 0x0 PRP2 0x0 00:21:59.488 [2024-07-25 12:29:14.112654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.488 [2024-07-25 12:29:14.112669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.488 [2024-07-25 12:29:14.112677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.488 [2024-07-25 12:29:14.112684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82208 len:8 PRP1 0x0 PRP2 0x0 00:21:59.488 [2024-07-25 12:29:14.112719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical abort/manual-completion/ABORTED - SQ DELETION messages repeated for the remaining queued READs ...] 00:21:59.489 [2024-07-25 12:29:14.128376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.489 [2024-07-25 12:29:14.128387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.489 [2024-07-25 12:29:14.128396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85160 len:8 PRP1 0x0 PRP2 0x0 00:21:59.489 [2024-07-25 12:29:14.128405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.489 [2024-07-25 12:29:14.128414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.489 [2024-07-25 12:29:14.128421] nvme_qpair.c:
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.489 [2024-07-25 12:29:14.128428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4856 len:8 PRP1 0x0 PRP2 0x0 00:21:59.489 [2024-07-25 12:29:14.128439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.489 [2024-07-25 12:29:14.128454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.489 [2024-07-25 12:29:14.128467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.489 [2024-07-25 12:29:14.128480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114776 len:8 PRP1 0x0 PRP2 0x0 00:21:59.489 [2024-07-25 12:29:14.128495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.489 [2024-07-25 12:29:14.128505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.489 [2024-07-25 12:29:14.128512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.489 [2024-07-25 12:29:14.128520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98200 len:8 PRP1 0x0 PRP2 0x0 00:21:59.489 [2024-07-25 12:29:14.128529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.489 [2024-07-25 12:29:14.128592] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x992a70 was disconnected and freed. reset controller. 00:21:59.489 [2024-07-25 12:29:14.128801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.489 [2024-07-25 12:29:14.128840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.489 [2024-07-25 12:29:14.128855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.489 [2024-07-25 12:29:14.128865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.489 [2024-07-25 12:29:14.128875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.489 [2024-07-25 12:29:14.128885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.489 [2024-07-25 12:29:14.128894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.489 [2024-07-25 12:29:14.128904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.489 [2024-07-25 12:29:14.128913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925240 is same with the state(5) to be set 00:21:59.489 [2024-07-25 12:29:14.129205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:59.489 [2024-07-25 12:29:14.129265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x925240 (9): Bad file descriptor 00:21:59.489 [2024-07-25 12:29:14.129432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.489 [2024-07-25 12:29:14.129472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925240 with addr=10.0.0.2, port=4420 00:21:59.489 [2024-07-25 12:29:14.129487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925240 is same with the state(5) to be set 00:21:59.489 [2024-07-25 12:29:14.129507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925240 (9): Bad file descriptor 00:21:59.489 [2024-07-25 12:29:14.129524] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:59.490 [2024-07-25 12:29:14.129534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:59.490 [2024-07-25 12:29:14.129544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:59.490 [2024-07-25 12:29:14.129565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:59.490 [2024-07-25 12:29:14.129576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:59.490 12:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96301 00:22:01.402 [2024-07-25 12:29:16.129911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.402 [2024-07-25 12:29:16.130013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925240 with addr=10.0.0.2, port=4420 00:22:01.403 [2024-07-25 12:29:16.130030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925240 is same with the state(5) to be set 00:22:01.403 [2024-07-25 12:29:16.130059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925240 (9): Bad file descriptor 00:22:01.403 [2024-07-25 12:29:16.130080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.403 [2024-07-25 12:29:16.130090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.403 [2024-07-25 12:29:16.130102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.403 [2024-07-25 12:29:16.130144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
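Each failed cycle above ends in 'Resetting controller failed.' followed by another 'resetting controller' notice roughly two seconds later; the harness verifies those delayed reconnects further down by counting trace lines with grep -c. A minimal sketch of that style of assertion, not the harness's exact code, assuming a trace.txt in the format dumped below:

    # Fail unless the trace recorded at least three delayed reconnects,
    # mirroring the grep -c / arithmetic test visible later in this log.
    trace=${1:-trace.txt}
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "expected at least 3 delayed reconnects, got $delays" >&2
        exit 1
    fi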
00:22:01.403 [2024-07-25 12:29:16.130157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.306 [2024-07-25 12:29:18.130408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.306 [2024-07-25 12:29:18.130484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925240 with addr=10.0.0.2, port=4420 00:22:03.306 [2024-07-25 12:29:18.130501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925240 is same with the state(5) to be set 00:22:03.306 [2024-07-25 12:29:18.130528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925240 (9): Bad file descriptor 00:22:03.306 [2024-07-25 12:29:18.130549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.306 [2024-07-25 12:29:18.130559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.306 [2024-07-25 12:29:18.130571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.306 [2024-07-25 12:29:18.130600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.306 [2024-07-25 12:29:18.130612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.879 [2024-07-25 12:29:20.130673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:05.879 [2024-07-25 12:29:20.130727] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.879 [2024-07-25 12:29:20.130741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:05.879 [2024-07-25 12:29:20.130752] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:05.879 [2024-07-25 12:29:20.130780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.446 00:22:06.446 Latency(us) 00:22:06.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.446 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:06.446 NVMe0n1 : 8.18 2459.23 9.61 15.65 0.00 51769.23 2412.92 7046430.72 00:22:06.446 =================================================================================================================== 00:22:06.446 Total : 2459.23 9.61 15.65 0.00 51769.23 2412.92 7046430.72 00:22:06.446 0 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:06.446 Attaching 5 probes... 
00:22:06.446 1340.328918: reset bdev controller NVMe0 00:22:06.446 1340.478004: reconnect bdev controller NVMe0 00:22:06.446 3340.787087: reconnect delay bdev controller NVMe0 00:22:06.446 3340.837374: reconnect bdev controller NVMe0 00:22:06.446 5341.355818: reconnect delay bdev controller NVMe0 00:22:06.446 5341.398282: reconnect bdev controller NVMe0 00:22:06.446 7341.793906: reconnect delay bdev controller NVMe0 00:22:06.446 7341.817556: reconnect bdev controller NVMe0 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96247 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96217 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96217 ']' 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96217 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.446 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96217 00:22:06.446 killing process with pid 96217 00:22:06.446 Received shutdown signal, test time was about 8.237770 seconds 00:22:06.446 00:22:06.446 Latency(us) 00:22:06.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.447 =================================================================================================================== 00:22:06.447 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.447 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:06.447 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:06.447 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96217' 00:22:06.447 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96217 00:22:06.447 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96217 00:22:06.705 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.963 rmmod 
nvme_tcp 00:22:06.963 rmmod nvme_fabrics 00:22:06.963 rmmod nvme_keyring 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95626 ']' 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95626 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95626 ']' 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95626 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95626 00:22:06.963 killing process with pid 95626 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95626' 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95626 00:22:06.963 12:29:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95626 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:07.530 00:22:07.530 real 0m47.745s 00:22:07.530 user 2m20.919s 00:22:07.530 sys 0m5.084s 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.530 ************************************ 00:22:07.530 END TEST nvmf_timeout 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:07.530 ************************************ 00:22:07.530 12:29:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:22:07.531 12:29:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:07.531 00:22:07.531 real 5m46.715s 00:22:07.531 user 14m59.230s 00:22:07.531 sys 1m4.952s 00:22:07.531 12:29:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.531 12:29:22 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.531 ************************************ 00:22:07.531 END TEST nvmf_host 00:22:07.531 ************************************ 00:22:07.531 00:22:07.531 real 15m57.766s 00:22:07.531 user 42m20.519s 00:22:07.531 sys 3m25.548s 00:22:07.531 12:29:22 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.531 12:29:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.531 ************************************ 00:22:07.531 END TEST nvmf_tcp 00:22:07.531 ************************************ 00:22:07.531 12:29:22 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:22:07.531 12:29:22 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:07.531 12:29:22 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:07.531 12:29:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:07.531 12:29:22 -- common/autotest_common.sh@10 -- # set +x 00:22:07.531 ************************************ 00:22:07.531 START TEST spdkcli_nvmf_tcp 00:22:07.531 ************************************ 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:07.531 * Looking for test storage... 00:22:07.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:07.531 12:29:22 
spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96514 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96514 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 96514 ']' 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.531 12:29:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.789 [2024-07-25 12:29:22.433745] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:22:07.789 [2024-07-25 12:29:22.434471] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96514 ] 00:22:07.789 [2024-07-25 12:29:22.583552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:08.046 [2024-07-25 12:29:22.708104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.046 [2024-07-25 12:29:22.708118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.976 12:29:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:08.976 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:08.976 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:08.976 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:08.976 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:08.976 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:08.976 '\''nvmf/transport create tcp 
max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:08.976 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:08.976 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:08.976 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:08.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:08.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:22:08.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:08.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:08.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:08.977 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:08.977 ' 00:22:11.540 [2024-07-25 12:29:26.260514] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.911 [2024-07-25 12:29:27.541560] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:15.438 [2024-07-25 12:29:29.919213] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:22:17.336 [2024-07-25 12:29:31.948803] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:18.707 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:18.707 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:18.707 Executing command: 
['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:18.707 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:18.707 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:18.707 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:18.707 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:18.707 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:18.707 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:18.707 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:18.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:18.707 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:18.965 12:29:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:18.965 
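The spdkcli_job.py run above batches all of these creates into a single session; the same hierarchy can be driven one command per invocation with scripts/spdkcli.py, the tool the match step below uses for 'll /nvmf'. A sketch under the assumption that spdkcli.py joins its arguments into one command against the default RPC socket:

    # Hypothetical one-at-a-time replay of the create commands shown above.
    SPDKCLI=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
    "$SPDKCLI" "/bdevs/malloc create 32 512 Malloc1"
    "$SPDKCLI" "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
    "$SPDKCLI" "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
    "$SPDKCLI" "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
    "$SPDKCLI" "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
    "$SPDKCLI" ll /nvmf    # dump the tree, as check_match does next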
12:29:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:18.965 12:29:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:18.965 12:29:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:18.965 12:29:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:18.965 12:29:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:18.965 12:29:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:22:18.965 12:29:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:22:19.223 12:29:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:19.481 12:29:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:19.481 12:29:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:19.481 12:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.481 12:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:19.481 12:29:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:19.481 12:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.481 12:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:19.481 12:29:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:19.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:19.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:19.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:19.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:19.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:19.481 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:19.481 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:19.481 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:19.481 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:19.481 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:19.481 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:19.481 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:19.481 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:19.481 ' 00:22:24.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:24.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:24.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:24.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:24.775 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:24.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:24.775 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:24.775 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:24.775 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:24.775 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:24.775 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:24.775 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:24.775 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:24.775 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:24.776 12:29:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:24.776 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.776 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:24.776 12:29:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96514 00:22:24.776 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 96514 ']' 00:22:24.776 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 96514 00:22:24.776 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:22:24.776 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.776 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96514 00:22:25.034 killing process with pid 96514 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96514' 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 96514 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 96514 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96514 ']' 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96514 00:22:25.034 Process with pid 96514 is not found 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 96514 ']' 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 96514 00:22:25.034 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (96514) - No such process 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 96514 is not found' 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:25.034 
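The teardown above follows the harness's killprocess pattern: probe the pid with kill -0, log, send SIGTERM, then wait to reap the exit status (the real helper also special-cases sudo-owned processes via ps -o comm=, omitted here). A condensed sketch:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0  # already gone: nothing to do
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"  # valid because the target was launched by this shell
    }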
00:22:25.034 real 0m17.630s 00:22:25.034 user 0m38.077s 00:22:25.034 sys 0m1.012s 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.034 12:29:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:25.034 ************************************ 00:22:25.034 END TEST spdkcli_nvmf_tcp 00:22:25.034 ************************************ 00:22:25.293 12:29:39 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:25.293 12:29:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:25.293 12:29:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.293 12:29:39 -- common/autotest_common.sh@10 -- # set +x 00:22:25.293 ************************************ 00:22:25.293 START TEST nvmf_identify_passthru 00:22:25.293 ************************************ 00:22:25.293 12:29:39 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:25.293 * Looking for test storage... 00:22:25.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:25.293 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:25.293 12:29:40 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.293 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:25.293 12:29:40 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:25.293 12:29:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.293 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.293 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:25.293 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:25.293 Cannot find device "nvmf_tgt_br" 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:25.293 Cannot find device "nvmf_tgt_br2" 00:22:25.293 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:22:25.294 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:25.294 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:25.294 Cannot find device "nvmf_tgt_br" 00:22:25.294 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:22:25.294 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:25.294 Cannot find device "nvmf_tgt_br2" 00:22:25.294 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:22:25.294 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:25.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:25.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:25.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:22:25.552 00:22:25.552 --- 10.0.0.2 ping statistics --- 00:22:25.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.552 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:25.552 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:25.552 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:22:25.552 00:22:25.552 --- 10.0.0.3 ping statistics --- 00:22:25.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.552 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:25.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:25.552 00:22:25.552 --- 10.0.0.1 ping statistics --- 00:22:25.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.552 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:25.552 12:29:40 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:25.552 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:25.552 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:22:25.552 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:25.811 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:22:25.811 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:25.811 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:22:25.811 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:22:25.811 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:22:25.811 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:25.811 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:25.811 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:25.811 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
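The identify step just logged works in two stages: get_first_nvme_bdf asks gen_nvme.sh for the locally attached NVMe controllers and keeps the first BDF, and the test then scrapes the serial number out of spdk_nvme_identify's text dump. A minimal standalone sketch of the same pattern, reusing the paths from this run (on another host the BDF and the serial would of course differ):

    # Take the first controller address reported by gen_nvme.sh ...
    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[0].params.traddr')
    # ... and pull the serial-number field out of the identify dump.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}'
    # On this QEMU-backed VM the pipeline prints 12340.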
00:22:25.811 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:25.811 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:25.811 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:26.069 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:22:26.069 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:26.069 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:26.069 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97019 00:22:26.069 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.069 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:26.069 12:29:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97019 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 97019 ']' 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.069 12:29:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:26.327 [2024-07-25 12:29:40.952929] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:22:26.327 [2024-07-25 12:29:40.953052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.327 [2024-07-25 12:29:41.090582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.586 [2024-07-25 12:29:41.210142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.586 [2024-07-25 12:29:41.210216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.586 [2024-07-25 12:29:41.210228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.586 [2024-07-25 12:29:41.210237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
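The target launched above runs with --wait-for-rpc, so it halts before subsystem initialization until framework_start_init arrives over JSON-RPC, and waitforlisten simply blocks until the UNIX-domain RPC socket answers. A rough equivalent of that wait, sketched with SPDK's scripts/rpc.py against the default socket this run uses:

    # Poll the RPC socket until the freshly started nvmf_tgt (pid 97019 here) is listening.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done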
00:22:26.586 [2024-07-25 12:29:41.210244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.586 [2024-07-25 12:29:41.210391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.586 [2024-07-25 12:29:41.210867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.586 [2024-07-25 12:29:41.211508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.586 [2024-07-25 12:29:41.211503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.152 12:29:41 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.152 12:29:41 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:22:27.152 12:29:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:22:27.152 12:29:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.152 12:29:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.152 12:29:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.152 12:29:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:22:27.152 12:29:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.152 12:29:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.410 [2024-07-25 12:29:42.020427] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:22:27.410 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.410 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.410 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.410 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.410 [2024-07-25 12:29:42.034413] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.410 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.410 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:22:27.410 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.410 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.410 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.411 Nvme0n1 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.411 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.411 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.411 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.411 [2024-07-25 12:29:42.176606] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.411 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.411 [ 00:22:27.411 { 00:22:27.411 "allow_any_host": true, 00:22:27.411 "hosts": [], 00:22:27.411 "listen_addresses": [], 00:22:27.411 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:27.411 "subtype": "Discovery" 00:22:27.411 }, 00:22:27.411 { 00:22:27.411 "allow_any_host": true, 00:22:27.411 "hosts": [], 00:22:27.411 "listen_addresses": [ 00:22:27.411 { 00:22:27.411 "adrfam": "IPv4", 00:22:27.411 "traddr": "10.0.0.2", 00:22:27.411 "trsvcid": "4420", 00:22:27.411 "trtype": "TCP" 00:22:27.411 } 00:22:27.411 ], 00:22:27.411 "max_cntlid": 65519, 00:22:27.411 "max_namespaces": 1, 00:22:27.411 "min_cntlid": 1, 00:22:27.411 "model_number": "SPDK bdev Controller", 00:22:27.411 "namespaces": [ 00:22:27.411 { 00:22:27.411 "bdev_name": "Nvme0n1", 00:22:27.411 "name": "Nvme0n1", 00:22:27.411 "nguid": "84A9B5FC569043CA8EC375654097B1A9", 00:22:27.411 "nsid": 1, 00:22:27.411 "uuid": "84a9b5fc-5690-43ca-8ec3-75654097b1a9" 00:22:27.411 } 00:22:27.411 ], 00:22:27.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.411 "serial_number": "SPDK00000000000001", 00:22:27.411 "subtype": "NVMe" 00:22:27.411 } 00:22:27.411 ] 00:22:27.411 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.411 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:27.411 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:27.411 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:27.669 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:22:27.669 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:27.669 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:27.669 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:27.927 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:22:27.927 12:29:42 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:22:27.927 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:22:27.927 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.927 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.928 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:27.928 12:29:42 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.928 rmmod nvme_tcp 00:22:27.928 rmmod nvme_fabrics 00:22:27.928 rmmod nvme_keyring 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97019 ']' 00:22:27.928 12:29:42 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97019 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 97019 ']' 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 97019 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97019 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:27.928 killing process with pid 97019 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97019' 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 97019 00:22:27.928 12:29:42 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 97019 00:22:28.187 12:29:43 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.187 12:29:43 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.187 12:29:43 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.187 12:29:43 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.187 12:29:43 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.187 12:29:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.187 12:29:43 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:28.187 12:29:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.187 12:29:43 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:28.446 ************************************ 00:22:28.446 END TEST nvmf_identify_passthru 00:22:28.446 ************************************ 00:22:28.446 00:22:28.446 real 0m3.104s 00:22:28.446 user 0m7.572s 00:22:28.446 sys 0m0.834s 00:22:28.446 12:29:43 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:28.446 12:29:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:28.446 12:29:43 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:28.446 12:29:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:28.446 12:29:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.446 12:29:43 -- common/autotest_common.sh@10 -- # set +x 00:22:28.446 ************************************ 00:22:28.446 START TEST nvmf_dif 00:22:28.446 ************************************ 00:22:28.446 12:29:43 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:28.446 * Looking for test storage... 00:22:28.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:28.446 12:29:43 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:28.446 12:29:43 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.446 12:29:43 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.446 12:29:43 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.446 12:29:43 nvmf_dif -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.446 12:29:43 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.446 12:29:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.446 12:29:43 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:28.446 12:29:43 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.446 12:29:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:28.446 12:29:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:28.446 12:29:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:28.446 12:29:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:28.446 12:29:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.446 12:29:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:28.446 12:29:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
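nvmftestinit is about to rebuild the same virtual topology the identify_passthru test used: one initiator veth on the host at 10.0.0.1, two target veths inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, with the host-side peers enslaved to the nvmf_br bridge and TCP port 4420 opened in iptables. Condensed from the commands that follow in the log, the core of that setup (shown for the first target interface only) is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT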
00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:28.446 Cannot find device "nvmf_tgt_br" 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@155 -- # true 00:22:28.446 12:29:43 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:28.446 Cannot find device "nvmf_tgt_br2" 00:22:28.447 12:29:43 nvmf_dif -- nvmf/common.sh@156 -- # true 00:22:28.447 12:29:43 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:28.447 12:29:43 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:28.447 Cannot find device "nvmf_tgt_br" 00:22:28.447 12:29:43 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:28.447 12:29:43 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:28.447 Cannot find device "nvmf_tgt_br2" 00:22:28.447 12:29:43 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:28.447 12:29:43 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:28.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:28.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type 
veth peer name nvmf_tgt_br 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:28.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:22:28.708 00:22:28.708 --- 10.0.0.2 ping statistics --- 00:22:28.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.708 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:28.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:28.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:22:28.708 00:22:28.708 --- 10.0.0.3 ping statistics --- 00:22:28.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.708 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:28.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:28.708 00:22:28.708 --- 10.0.0.1 ping statistics --- 00:22:28.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.708 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:28.708 12:29:43 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:29.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:29.275 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:29.275 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.275 12:29:43 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:29.275 12:29:43 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.275 12:29:43 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:29.275 12:29:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97362 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:29.275 12:29:43 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97362 00:22:29.275 12:29:43 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 97362 ']' 00:22:29.275 12:29:43 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.275 12:29:43 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.275 12:29:43 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.275 12:29:43 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.275 12:29:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:29.275 [2024-07-25 12:29:44.059097] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:22:29.275 [2024-07-25 12:29:44.059222] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.534 [2024-07-25 12:29:44.200742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.534 [2024-07-25 12:29:44.331233] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
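NVMF_TRANSPORT_OPTS has just been extended to '-t tcp -o --dif-insert-or-strip', so the dif tests run against a TCP transport that inserts and strips protection information on the wire. The target configuration the next steps issue through rpc_cmd reduces to the following RPCs (the 64/512/16 geometry and DIF type 1 come from the NULL_SIZE, NULL_BLOCK_SIZE, NULL_META and NULL_DIF defaults dif.sh set above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o --dif-insert-or-strip
    "$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420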
00:22:29.534 [2024-07-25 12:29:44.331340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.534 [2024-07-25 12:29:44.331356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.534 [2024-07-25 12:29:44.331367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.534 [2024-07-25 12:29:44.331376] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.534 [2024-07-25 12:29:44.331410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:22:30.482 12:29:45 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:30.482 12:29:45 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.482 12:29:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:30.482 12:29:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:30.482 [2024-07-25 12:29:45.065995] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.482 12:29:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.482 12:29:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:30.482 ************************************ 00:22:30.482 START TEST fio_dif_1_default 00:22:30.482 ************************************ 00:22:30.482 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:22:30.482 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:30.482 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:30.482 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:30.482 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:30.482 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:30.483 bdev_null0 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.483 12:29:45 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:30.483 [2024-07-25 12:29:45.110092] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.483 { 00:22:30.483 "params": { 00:22:30.483 "name": "Nvme$subsystem", 00:22:30.483 "trtype": "$TEST_TRANSPORT", 00:22:30.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.483 "adrfam": "ipv4", 00:22:30.483 "trsvcid": "$NVMF_PORT", 00:22:30.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.483 "hdgst": ${hdgst:-false}, 00:22:30.483 "ddgst": ${ddgst:-false} 00:22:30.483 }, 00:22:30.483 "method": "bdev_nvme_attach_controller" 00:22:30.483 } 00:22:30.483 EOF 00:22:30.483 )") 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:30.483 12:29:45 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:30.483 "params": { 00:22:30.483 "name": "Nvme0", 00:22:30.483 "trtype": "tcp", 00:22:30.483 "traddr": "10.0.0.2", 00:22:30.483 "adrfam": "ipv4", 00:22:30.483 "trsvcid": "4420", 00:22:30.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:30.483 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:30.483 "hdgst": false, 00:22:30.483 "ddgst": false 00:22:30.483 }, 00:22:30.483 "method": "bdev_nvme_attach_controller" 00:22:30.483 }' 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:30.483 12:29:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:30.483 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:30.483 fio-3.35 00:22:30.483 Starting 1 thread 00:22:42.692 00:22:42.692 filename0: (groupid=0, jobs=1): err= 0: pid=97445: Thu Jul 25 12:29:55 2024 00:22:42.692 read: IOPS=1370, BW=5482KiB/s (5613kB/s)(53.6MiB/10020msec) 00:22:42.692 slat (nsec): min=7512, max=52695, avg=8753.90, stdev=3368.02 00:22:42.692 clat (usec): min=440, max=41546, avg=2892.12, stdev=9608.25 00:22:42.692 lat (usec): min=448, max=41558, avg=2900.87, stdev=9608.20 00:22:42.692 clat percentiles (usec): 00:22:42.692 | 1.00th=[ 445], 5.00th=[ 449], 10.00th=[ 457], 20.00th=[ 461], 00:22:42.692 | 30.00th=[ 465], 40.00th=[ 469], 50.00th=[ 474], 60.00th=[ 
482], 00:22:42.692 | 70.00th=[ 486], 80.00th=[ 490], 90.00th=[ 519], 95.00th=[40633], 00:22:42.692 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:22:42.692 | 99.99th=[41681] 00:22:42.692 bw ( KiB/s): min= 1920, max= 7328, per=100.00%, avg=5491.20, stdev=1268.81, samples=20 00:22:42.692 iops : min= 480, max= 1832, avg=1372.80, stdev=317.20, samples=20 00:22:42.692 lat (usec) : 500=86.49%, 750=7.54% 00:22:42.692 lat (msec) : 10=0.03%, 50=5.94% 00:22:42.692 cpu : usr=91.38%, sys=7.79%, ctx=15, majf=0, minf=9 00:22:42.692 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:42.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.692 issued rwts: total=13732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.692 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:42.692 00:22:42.692 Run status group 0 (all jobs): 00:22:42.692 READ: bw=5482KiB/s (5613kB/s), 5482KiB/s-5482KiB/s (5613kB/s-5613kB/s), io=53.6MiB (56.2MB), run=10020-10020msec 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.692 00:22:42.692 real 0m11.046s 00:22:42.692 user 0m9.824s 00:22:42.692 sys 0m1.049s 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.692 ************************************ 00:22:42.692 END TEST fio_dif_1_default 00:22:42.692 ************************************ 00:22:42.692 12:29:56 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:42.692 12:29:56 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:42.692 12:29:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:42.692 12:29:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:42.692 ************************************ 00:22:42.692 START TEST fio_dif_1_multi_subsystems 00:22:42.692 ************************************ 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:42.692 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:42.693 bdev_null0 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:42.693 [2024-07-25 12:29:56.209075] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:42.693 bdev_null1 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.693 { 00:22:42.693 "params": { 00:22:42.693 "name": "Nvme$subsystem", 00:22:42.693 "trtype": "$TEST_TRANSPORT", 00:22:42.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.693 "adrfam": "ipv4", 00:22:42.693 "trsvcid": "$NVMF_PORT", 00:22:42.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.693 "hdgst": ${hdgst:-false}, 00:22:42.693 "ddgst": ${ddgst:-false} 00:22:42.693 }, 00:22:42.693 "method": "bdev_nvme_attach_controller" 00:22:42.693 } 00:22:42.693 EOF 00:22:42.693 )") 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.693 { 00:22:42.693 "params": { 00:22:42.693 "name": "Nvme$subsystem", 00:22:42.693 "trtype": "$TEST_TRANSPORT", 00:22:42.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.693 "adrfam": "ipv4", 00:22:42.693 "trsvcid": "$NVMF_PORT", 00:22:42.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.693 "hdgst": ${hdgst:-false}, 00:22:42.693 "ddgst": ${ddgst:-false} 00:22:42.693 }, 00:22:42.693 "method": "bdev_nvme_attach_controller" 00:22:42.693 } 00:22:42.693 EOF 00:22:42.693 )") 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
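[Note: the per-subsystem heredoc fragments assembled above are merged by the jq . step into one JSON document (emitted by the printf just below), which is the bdev configuration that fio's spdk_bdev engine loads through --spdk_json_conf. A standalone reproduction of this two-controller attach would look roughly like the sketch below. Two assumptions: the outer "subsystems"/"config" wrapper is inferred from SPDK's standard JSON config schema (the harness builds it inside gen_nvmf_target_json, which is not shown here), and job.fio is a hypothetical job file, since the harness pipes its job config in on /dev/fd/61.]

# Sketch only: the attach parameters are copied from the printf output below;
# the subsystems/bdev/config wrapper is assumed from SPDK's JSON config schema.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } }
    ]
  }]
}
EOF
# Same launch as the LD_PRELOAD line below; job.fio is a hypothetical job file.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json job.fio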
00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:42.693 "params": { 00:22:42.693 "name": "Nvme0", 00:22:42.693 "trtype": "tcp", 00:22:42.693 "traddr": "10.0.0.2", 00:22:42.693 "adrfam": "ipv4", 00:22:42.693 "trsvcid": "4420", 00:22:42.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:42.693 "hdgst": false, 00:22:42.693 "ddgst": false 00:22:42.693 }, 00:22:42.693 "method": "bdev_nvme_attach_controller" 00:22:42.693 },{ 00:22:42.693 "params": { 00:22:42.693 "name": "Nvme1", 00:22:42.693 "trtype": "tcp", 00:22:42.693 "traddr": "10.0.0.2", 00:22:42.693 "adrfam": "ipv4", 00:22:42.693 "trsvcid": "4420", 00:22:42.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.693 "hdgst": false, 00:22:42.693 "ddgst": false 00:22:42.693 }, 00:22:42.693 "method": "bdev_nvme_attach_controller" 00:22:42.693 }' 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:42.693 12:29:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:42.693 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:42.694 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:42.694 fio-3.35 00:22:42.694 Starting 2 threads 00:22:52.659 00:22:52.659 filename0: (groupid=0, jobs=1): err= 0: pid=97604: Thu Jul 25 12:30:07 2024 00:22:52.659 read: IOPS=278, BW=1116KiB/s (1143kB/s)(10.9MiB/10038msec) 00:22:52.659 slat (nsec): min=7612, max=65532, avg=10602.88, stdev=6171.15 00:22:52.659 clat (usec): min=452, max=42484, avg=14305.51, stdev=19182.17 00:22:52.659 lat (usec): min=460, max=42494, avg=14316.11, stdev=19182.00 00:22:52.659 clat percentiles (usec): 00:22:52.659 | 1.00th=[ 457], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 490], 00:22:52.659 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 807], 60.00th=[ 1090], 00:22:52.659 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:22:52.659 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:22:52.659 | 99.99th=[42730] 00:22:52.659 bw ( KiB/s): min= 608, max= 2784, per=50.90%, avg=1118.40, stdev=466.91, samples=20 00:22:52.659 iops : 
min= 152, max= 696, avg=279.60, stdev=116.73, samples=20 00:22:52.659 lat (usec) : 500=28.50%, 750=20.21%, 1000=9.54% 00:22:52.659 lat (msec) : 2=8.04%, 50=33.71% 00:22:52.659 cpu : usr=95.17%, sys=4.32%, ctx=111, majf=0, minf=9 00:22:52.659 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:52.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.659 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.659 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:52.659 filename1: (groupid=0, jobs=1): err= 0: pid=97605: Thu Jul 25 12:30:07 2024 00:22:52.659 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10037msec) 00:22:52.659 slat (nsec): min=7596, max=66169, avg=11564.71, stdev=7959.49 00:22:52.659 clat (usec): min=450, max=42122, avg=14765.45, stdev=19327.22 00:22:52.659 lat (usec): min=458, max=42160, avg=14777.01, stdev=19326.87 00:22:52.659 clat percentiles (usec): 00:22:52.659 | 1.00th=[ 461], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 502], 00:22:52.659 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 824], 60.00th=[ 1123], 00:22:52.659 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:22:52.659 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:22:52.659 | 99.99th=[42206] 00:22:52.659 bw ( KiB/s): min= 672, max= 2912, per=49.31%, avg=1083.20, stdev=479.53, samples=20 00:22:52.659 iops : min= 168, max= 728, avg=270.80, stdev=119.88, samples=20 00:22:52.659 lat (usec) : 500=18.92%, 750=28.47%, 1000=9.40% 00:22:52.659 lat (msec) : 2=8.41%, 50=34.81% 00:22:52.659 cpu : usr=95.17%, sys=4.31%, ctx=64, majf=0, minf=0 00:22:52.659 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:52.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.659 issued rwts: total=2712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.659 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:52.659 00:22:52.659 Run status group 0 (all jobs): 00:22:52.659 READ: bw=2196KiB/s (2249kB/s), 1081KiB/s-1116KiB/s (1107kB/s-1143kB/s), io=21.5MiB (22.6MB), run=10037-10038msec 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.659 12:30:07 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.659 00:22:52.659 real 0m11.253s 00:22:52.659 user 0m19.970s 00:22:52.659 sys 0m1.164s 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:52.659 12:30:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 ************************************ 00:22:52.659 END TEST fio_dif_1_multi_subsystems 00:22:52.659 ************************************ 00:22:52.659 12:30:07 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:52.659 12:30:07 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:52.659 12:30:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:52.659 12:30:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 ************************************ 00:22:52.659 START TEST fio_dif_rand_params 00:22:52.659 ************************************ 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 bdev_null0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:52.659 [2024-07-25 12:30:07.510108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:52.659 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.660 { 00:22:52.660 "params": { 00:22:52.660 "name": "Nvme$subsystem", 00:22:52.660 "trtype": "$TEST_TRANSPORT", 00:22:52.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.660 "adrfam": "ipv4", 00:22:52.660 "trsvcid": "$NVMF_PORT", 00:22:52.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.660 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:52.660 "hdgst": ${hdgst:-false}, 00:22:52.660 "ddgst": ${ddgst:-false} 00:22:52.660 }, 00:22:52.660 "method": "bdev_nvme_attach_controller" 00:22:52.660 } 00:22:52.660 EOF 00:22:52.660 )") 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:52.660 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:52.918 "params": { 00:22:52.918 "name": "Nvme0", 00:22:52.918 "trtype": "tcp", 00:22:52.918 "traddr": "10.0.0.2", 00:22:52.918 "adrfam": "ipv4", 00:22:52.918 "trsvcid": "4420", 00:22:52.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:52.918 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:52.918 "hdgst": false, 00:22:52.918 "ddgst": false 00:22:52.918 }, 00:22:52.918 "method": "bdev_nvme_attach_controller" 00:22:52.918 }' 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:52.918 12:30:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:52.918 12:30:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:52.918 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:52.918 ... 00:22:52.918 fio-3.35 00:22:52.918 Starting 3 threads 00:22:59.476 00:22:59.476 filename0: (groupid=0, jobs=1): err= 0: pid=97757: Thu Jul 25 12:30:13 2024 00:22:59.476 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(156MiB/5003msec) 00:22:59.476 slat (nsec): min=7699, max=42472, avg=14056.11, stdev=5061.21 00:22:59.476 clat (usec): min=6035, max=52581, avg=11986.26, stdev=3153.80 00:22:59.476 lat (usec): min=6047, max=52595, avg=12000.31, stdev=3154.05 00:22:59.476 clat percentiles (usec): 00:22:59.476 | 1.00th=[ 6849], 5.00th=[ 7832], 10.00th=[ 9634], 20.00th=[10945], 00:22:59.476 | 30.00th=[11469], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:22:59.476 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[13960], 00:22:59.476 | 99.00th=[16450], 99.50th=[19792], 99.90th=[50594], 99.95th=[52691], 00:22:59.476 | 99.99th=[52691] 00:22:59.476 bw ( KiB/s): min=29440, max=35768, per=35.03%, avg=32048.89, stdev=2163.38, samples=9 00:22:59.476 iops : min= 230, max= 279, avg=250.33, stdev=16.81, samples=9 00:22:59.476 lat (msec) : 10=10.72%, 20=88.80%, 50=0.24%, 100=0.24% 00:22:59.476 cpu : usr=92.18%, sys=6.00%, ctx=37, majf=0, minf=0 00:22:59.476 IO depths : 1=3.1%, 2=96.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:59.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.476 issued rwts: total=1250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:59.476 filename0: (groupid=0, jobs=1): err= 0: pid=97758: Thu Jul 25 12:30:13 2024 00:22:59.476 read: IOPS=256, BW=32.1MiB/s (33.7MB/s)(161MiB/5005msec) 00:22:59.476 slat (nsec): min=5607, max=67057, avg=14066.18, stdev=4712.60 00:22:59.476 clat (usec): min=5479, max=52500, avg=11652.85, stdev=5580.96 00:22:59.476 lat (usec): min=5492, max=52519, avg=11666.91, stdev=5581.40 00:22:59.476 clat percentiles (usec): 00:22:59.476 | 1.00th=[ 7701], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:22:59.476 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:22:59.476 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12649], 00:22:59.476 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52691], 00:22:59.476 | 99.99th=[52691] 00:22:59.476 bw ( KiB/s): min=23040, max=36096, per=35.93%, avg=32870.40, stdev=3925.70, samples=10 00:22:59.476 iops : min= 180, max= 282, avg=256.80, stdev=30.67, samples=10 00:22:59.476 lat (msec) : 10=13.69%, 20=84.45%, 100=1.87% 00:22:59.476 cpu : usr=92.15%, sys=6.06%, ctx=13, majf=0, minf=0 00:22:59.476 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:59.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.476 issued rwts: total=1286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:59.476 filename0: (groupid=0, jobs=1): err= 0: pid=97759: Thu Jul 25 12:30:13 2024 00:22:59.476 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(130MiB/5001msec) 00:22:59.476 slat (usec): min=7, max=112, avg=12.66, 
stdev= 7.00 00:22:59.476 clat (usec): min=4201, max=24312, avg=14378.96, stdev=2462.76 00:22:59.476 lat (usec): min=4213, max=24321, avg=14391.62, stdev=2462.75 00:22:59.476 clat percentiles (usec): 00:22:59.476 | 1.00th=[ 4293], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[13698], 00:22:59.476 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:22:59.476 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16319], 95.00th=[16581], 00:22:59.476 | 99.00th=[18744], 99.50th=[19530], 99.90th=[24249], 99.95th=[24249], 00:22:59.476 | 99.99th=[24249] 00:22:59.476 bw ( KiB/s): min=24576, max=32256, per=29.20%, avg=26709.33, stdev=2867.89, samples=9 00:22:59.476 iops : min= 192, max= 252, avg=208.67, stdev=22.41, samples=9 00:22:59.476 lat (msec) : 10=9.03%, 20=90.68%, 50=0.29% 00:22:59.476 cpu : usr=91.84%, sys=6.36%, ctx=50, majf=0, minf=0 00:22:59.476 IO depths : 1=29.5%, 2=70.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:59.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.476 issued rwts: total=1041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:59.476 00:22:59.476 Run status group 0 (all jobs): 00:22:59.476 READ: bw=89.3MiB/s (93.7MB/s), 26.0MiB/s-32.1MiB/s (27.3MB/s-33.7MB/s), io=447MiB (469MB), run=5001-5005msec 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 
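[Note: the DIF-type-3 subsystem from the 128k/3-job pass has just been torn down, and the harness switches to NULL_DIF=2 with bs=4k, numjobs=8, iodepth=16 and two extra files, so create_subsystems 0 1 2 builds three DIF-type-2 null bdevs and exports each over TCP. rpc_cmd is the harness's wrapper for SPDK JSON-RPC calls; in a standalone setup the same sequence could be issued with scripts/rpc.py, as in this hedged sketch. Method names and arguments are taken verbatim from the xtrace that follows; the scripts/rpc.py path assumes the SPDK repo root as the working directory.]

# Equivalent of create_subsystems 0 1 2 against a running nvmf target;
# arguments mirror the rpc_cmd calls below.
for i in 0 1 2; do
    # 64 MB null bdev, 512-byte blocks, 16 metadata bytes, DIF type 2
    scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done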
00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.476 bdev_null0 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:59.476 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 [2024-07-25 12:30:13.588656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 bdev_null1 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
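[Note: namespaces and listeners for cnode1 and cnode2 follow in the next few calls. Once all three subsystems are listening, gen_fio_conf emits the job config for the 24-thread run further below ("Starting 24 threads": 3 files x 8 jobs). A plain-file equivalent, launched the same way as in the earlier sketch, would look roughly like this; the NvmeXn1 filenames are an assumption based on SPDK's usual <controller-name>n<nsid> bdev naming and do not appear in the log.]

# Hypothetical job file matching the NULL_DIF=2 pass: randread, 4k blocks,
# iodepth 16, 8 jobs per file, three files -> 24 threads in total.
# filename=NvmeXn1 is assumed; the spdk_bdev engine requires thread=1.
cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF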
00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 bdev_null2 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:59.477 
12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:59.477 { 00:22:59.477 "params": { 00:22:59.477 "name": "Nvme$subsystem", 00:22:59.477 "trtype": "$TEST_TRANSPORT", 00:22:59.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.477 "adrfam": "ipv4", 00:22:59.477 "trsvcid": "$NVMF_PORT", 00:22:59.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.477 "hdgst": ${hdgst:-false}, 00:22:59.477 "ddgst": ${ddgst:-false} 00:22:59.477 }, 00:22:59.477 "method": "bdev_nvme_attach_controller" 00:22:59.477 } 00:22:59.477 EOF 00:22:59.477 )") 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:59.477 { 00:22:59.477 "params": { 00:22:59.477 "name": "Nvme$subsystem", 00:22:59.477 "trtype": "$TEST_TRANSPORT", 00:22:59.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.477 "adrfam": "ipv4", 00:22:59.477 "trsvcid": "$NVMF_PORT", 00:22:59.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.477 "hdgst": ${hdgst:-false}, 00:22:59.477 "ddgst": ${ddgst:-false} 00:22:59.477 }, 00:22:59.477 "method": "bdev_nvme_attach_controller" 00:22:59.477 } 00:22:59.477 EOF 00:22:59.477 )") 00:22:59.477 12:30:13 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:59.477 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:59.478 { 00:22:59.478 "params": { 00:22:59.478 "name": "Nvme$subsystem", 00:22:59.478 "trtype": "$TEST_TRANSPORT", 00:22:59.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.478 "adrfam": "ipv4", 00:22:59.478 "trsvcid": "$NVMF_PORT", 00:22:59.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.478 "hdgst": ${hdgst:-false}, 00:22:59.478 "ddgst": ${ddgst:-false} 00:22:59.478 }, 00:22:59.478 "method": "bdev_nvme_attach_controller" 00:22:59.478 } 00:22:59.478 EOF 00:22:59.478 )") 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:59.478 "params": { 00:22:59.478 "name": "Nvme0", 00:22:59.478 "trtype": "tcp", 00:22:59.478 "traddr": "10.0.0.2", 00:22:59.478 "adrfam": "ipv4", 00:22:59.478 "trsvcid": "4420", 00:22:59.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:59.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:59.478 "hdgst": false, 00:22:59.478 "ddgst": false 00:22:59.478 }, 00:22:59.478 "method": "bdev_nvme_attach_controller" 00:22:59.478 },{ 00:22:59.478 "params": { 00:22:59.478 "name": "Nvme1", 00:22:59.478 "trtype": "tcp", 00:22:59.478 "traddr": "10.0.0.2", 00:22:59.478 "adrfam": "ipv4", 00:22:59.478 "trsvcid": "4420", 00:22:59.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.478 "hdgst": false, 00:22:59.478 "ddgst": false 00:22:59.478 }, 00:22:59.478 "method": "bdev_nvme_attach_controller" 00:22:59.478 },{ 00:22:59.478 "params": { 00:22:59.478 "name": "Nvme2", 00:22:59.478 "trtype": "tcp", 00:22:59.478 "traddr": "10.0.0.2", 00:22:59.478 "adrfam": "ipv4", 00:22:59.478 "trsvcid": "4420", 00:22:59.478 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:59.478 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:59.478 "hdgst": false, 00:22:59.478 "ddgst": false 00:22:59.478 }, 00:22:59.478 "method": "bdev_nvme_attach_controller" 00:22:59.478 }' 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:59.478 12:30:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:59.478 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:59.478 ... 00:22:59.478 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:59.478 ... 00:22:59.478 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:59.478 ... 00:22:59.478 fio-3.35 00:22:59.478 Starting 24 threads 00:23:11.683 00:23:11.683 filename0: (groupid=0, jobs=1): err= 0: pid=97860: Thu Jul 25 12:30:24 2024 00:23:11.683 read: IOPS=199, BW=798KiB/s (817kB/s)(8016KiB/10041msec) 00:23:11.683 slat (usec): min=6, max=8023, avg=23.98, stdev=283.07 00:23:11.683 clat (msec): min=24, max=175, avg=79.90, stdev=25.65 00:23:11.683 lat (msec): min=24, max=175, avg=79.92, stdev=25.65 00:23:11.683 clat percentiles (msec): 00:23:11.683 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 59], 00:23:11.683 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:23:11.683 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 116], 95.00th=[ 129], 00:23:11.683 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 176], 00:23:11.683 | 99.99th=[ 176] 00:23:11.683 bw ( KiB/s): min= 552, max= 992, per=4.20%, avg=795.05, stdev=152.87, samples=20 00:23:11.683 iops : min= 138, max= 248, avg=198.75, stdev=38.23, samples=20 00:23:11.683 lat (msec) : 50=12.77%, 100=67.56%, 250=19.66% 00:23:11.683 cpu : usr=36.83%, sys=1.09%, ctx=1039, majf=0, minf=9 00:23:11.683 IO depths : 1=1.5%, 2=3.2%, 4=11.3%, 8=71.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:11.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.683 filename0: (groupid=0, jobs=1): err= 0: pid=97861: Thu Jul 25 12:30:24 2024 00:23:11.683 read: IOPS=185, BW=741KiB/s (759kB/s)(7412KiB/10001msec) 00:23:11.683 slat (usec): min=4, max=12023, avg=17.09, stdev=279.09 00:23:11.683 clat (msec): min=3, max=182, avg=86.24, stdev=28.35 00:23:11.683 lat (msec): min=3, max=182, avg=86.26, stdev=28.36 00:23:11.683 clat percentiles (msec): 00:23:11.683 | 1.00th=[ 6], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 69], 00:23:11.683 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 93], 00:23:11.683 | 70.00th=[ 97], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 142], 00:23:11.683 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 182], 99.95th=[ 182], 00:23:11.683 | 99.99th=[ 182] 00:23:11.683 bw ( KiB/s): min= 555, max= 1024, per=3.79%, avg=718.74, stdev=142.64, samples=19 00:23:11.683 iops : min= 138, max= 256, avg=179.58, stdev=35.70, samples=19 00:23:11.683 lat (msec) : 4=0.59%, 10=1.13%, 20=0.86%, 50=5.23%, 100=64.49% 00:23:11.683 lat (msec) : 250=27.68% 00:23:11.683 cpu : usr=32.20%, 
sys=0.92%, ctx=864, majf=0, minf=9 00:23:11.683 IO depths : 1=1.8%, 2=4.0%, 4=12.7%, 8=69.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:23:11.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 issued rwts: total=1853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.683 filename0: (groupid=0, jobs=1): err= 0: pid=97862: Thu Jul 25 12:30:24 2024 00:23:11.683 read: IOPS=197, BW=789KiB/s (808kB/s)(7912KiB/10026msec) 00:23:11.683 slat (usec): min=5, max=9017, avg=25.88, stdev=312.59 00:23:11.683 clat (msec): min=30, max=200, avg=80.88, stdev=29.59 00:23:11.683 lat (msec): min=30, max=200, avg=80.90, stdev=29.59 00:23:11.683 clat percentiles (msec): 00:23:11.683 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:23:11.683 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 82], 00:23:11.683 | 70.00th=[ 95], 80.00th=[ 112], 90.00th=[ 123], 95.00th=[ 130], 00:23:11.683 | 99.00th=[ 165], 99.50th=[ 180], 99.90th=[ 201], 99.95th=[ 201], 00:23:11.683 | 99.99th=[ 201] 00:23:11.683 bw ( KiB/s): min= 384, max= 1168, per=4.16%, avg=787.00, stdev=203.36, samples=20 00:23:11.683 iops : min= 96, max= 292, avg=196.75, stdev=50.84, samples=20 00:23:11.683 lat (msec) : 50=16.43%, 100=58.49%, 250=25.08% 00:23:11.683 cpu : usr=37.93%, sys=1.29%, ctx=1397, majf=0, minf=9 00:23:11.683 IO depths : 1=1.6%, 2=3.2%, 4=10.6%, 8=72.5%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:11.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 issued rwts: total=1978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.683 filename0: (groupid=0, jobs=1): err= 0: pid=97863: Thu Jul 25 12:30:24 2024 00:23:11.683 read: IOPS=179, BW=717KiB/s (734kB/s)(7168KiB/10002msec) 00:23:11.683 slat (usec): min=3, max=8038, avg=22.05, stdev=250.66 00:23:11.683 clat (usec): min=1499, max=200093, avg=89074.85, stdev=34940.67 00:23:11.683 lat (usec): min=1506, max=200114, avg=89096.90, stdev=34946.46 00:23:11.683 clat percentiles (msec): 00:23:11.683 | 1.00th=[ 4], 5.00th=[ 40], 10.00th=[ 51], 20.00th=[ 69], 00:23:11.683 | 30.00th=[ 74], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 95], 00:23:11.683 | 70.00th=[ 106], 80.00th=[ 115], 90.00th=[ 129], 95.00th=[ 153], 00:23:11.683 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:23:11.683 | 99.99th=[ 201] 00:23:11.683 bw ( KiB/s): min= 384, max= 1024, per=3.58%, avg=679.95, stdev=162.61, samples=19 00:23:11.683 iops : min= 96, max= 256, avg=169.95, stdev=40.63, samples=19 00:23:11.683 lat (msec) : 2=0.89%, 4=0.89%, 10=1.79%, 50=5.86%, 100=56.92% 00:23:11.683 lat (msec) : 250=33.65% 00:23:11.683 cpu : usr=41.39%, sys=1.50%, ctx=1241, majf=0, minf=9 00:23:11.683 IO depths : 1=3.6%, 2=7.9%, 4=19.1%, 8=60.4%, 16=8.9%, 32=0.0%, >=64=0.0% 00:23:11.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 complete : 0=0.0%, 4=92.4%, 8=1.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 issued rwts: total=1792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.683 filename0: (groupid=0, jobs=1): err= 0: pid=97864: Thu Jul 25 12:30:24 2024 00:23:11.683 read: IOPS=175, BW=703KiB/s (720kB/s)(7040KiB/10014msec) 00:23:11.683 slat 
(usec): min=3, max=5023, avg=17.74, stdev=163.71 00:23:11.683 clat (msec): min=29, max=172, avg=90.90, stdev=24.36 00:23:11.683 lat (msec): min=29, max=172, avg=90.92, stdev=24.37 00:23:11.683 clat percentiles (msec): 00:23:11.683 | 1.00th=[ 41], 5.00th=[ 52], 10.00th=[ 63], 20.00th=[ 72], 00:23:11.683 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 90], 60.00th=[ 96], 00:23:11.683 | 70.00th=[ 103], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 133], 00:23:11.683 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 174], 99.95th=[ 174], 00:23:11.683 | 99.99th=[ 174] 00:23:11.683 bw ( KiB/s): min= 512, max= 1024, per=3.70%, avg=700.53, stdev=140.60, samples=19 00:23:11.683 iops : min= 128, max= 256, avg=175.11, stdev=35.17, samples=19 00:23:11.683 lat (msec) : 50=4.55%, 100=60.23%, 250=35.23% 00:23:11.683 cpu : usr=42.18%, sys=1.44%, ctx=1379, majf=0, minf=9 00:23:11.683 IO depths : 1=3.6%, 2=7.7%, 4=18.6%, 8=60.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:23:11.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 issued rwts: total=1760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.683 filename0: (groupid=0, jobs=1): err= 0: pid=97865: Thu Jul 25 12:30:24 2024 00:23:11.683 read: IOPS=223, BW=894KiB/s (915kB/s)(8960KiB/10024msec) 00:23:11.683 slat (usec): min=5, max=8023, avg=21.94, stdev=235.68 00:23:11.683 clat (msec): min=31, max=180, avg=71.40, stdev=23.26 00:23:11.683 lat (msec): min=31, max=180, avg=71.42, stdev=23.26 00:23:11.683 clat percentiles (msec): 00:23:11.683 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:23:11.683 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 73], 00:23:11.683 | 70.00th=[ 79], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 116], 00:23:11.683 | 99.00th=[ 131], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:23:11.683 | 99.99th=[ 180] 00:23:11.683 bw ( KiB/s): min= 512, max= 1200, per=4.71%, avg=893.50, stdev=191.02, samples=20 00:23:11.683 iops : min= 128, max= 300, avg=223.35, stdev=47.80, samples=20 00:23:11.683 lat (msec) : 50=18.97%, 100=67.81%, 250=13.21% 00:23:11.683 cpu : usr=43.23%, sys=1.37%, ctx=1356, majf=0, minf=9 00:23:11.683 IO depths : 1=0.8%, 2=1.7%, 4=7.2%, 8=77.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:23:11.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 complete : 0=0.0%, 4=89.5%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.683 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.683 filename0: (groupid=0, jobs=1): err= 0: pid=97866: Thu Jul 25 12:30:24 2024 00:23:11.683 read: IOPS=168, BW=672KiB/s (689kB/s)(6728KiB/10006msec) 00:23:11.683 slat (usec): min=6, max=8049, avg=20.67, stdev=277.02 00:23:11.683 clat (msec): min=4, max=203, avg=95.01, stdev=29.40 00:23:11.683 lat (msec): min=4, max=203, avg=95.04, stdev=29.40 00:23:11.683 clat percentiles (msec): 00:23:11.683 | 1.00th=[ 22], 5.00th=[ 51], 10.00th=[ 69], 20.00th=[ 72], 00:23:11.683 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 95], 60.00th=[ 99], 00:23:11.683 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 132], 95.00th=[ 153], 00:23:11.683 | 99.00th=[ 169], 99.50th=[ 203], 99.90th=[ 205], 99.95th=[ 205], 00:23:11.683 | 99.99th=[ 205] 00:23:11.683 bw ( KiB/s): min= 464, max= 896, per=3.46%, avg=656.32, stdev=108.55, samples=19 00:23:11.683 iops : min= 116, max= 224, 
avg=164.05, stdev=27.14, samples=19 00:23:11.683 lat (msec) : 10=0.95%, 50=3.51%, 100=59.51%, 250=36.03% 00:23:11.683 cpu : usr=32.17%, sys=1.03%, ctx=982, majf=0, minf=9 00:23:11.683 IO depths : 1=3.7%, 2=7.6%, 4=18.0%, 8=61.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:23:11.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 complete : 0=0.0%, 4=92.1%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.683 issued rwts: total=1682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.684 filename0: (groupid=0, jobs=1): err= 0: pid=97867: Thu Jul 25 12:30:24 2024 00:23:11.684 read: IOPS=172, BW=688KiB/s (705kB/s)(6900KiB/10025msec) 00:23:11.684 slat (nsec): min=4878, max=39515, avg=11473.38, stdev=4577.44 00:23:11.684 clat (msec): min=33, max=195, avg=92.78, stdev=28.72 00:23:11.684 lat (msec): min=33, max=195, avg=92.80, stdev=28.72 00:23:11.684 clat percentiles (msec): 00:23:11.684 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 68], 00:23:11.684 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 101], 00:23:11.684 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 146], 00:23:11.684 | 99.00th=[ 161], 99.50th=[ 184], 99.90th=[ 197], 99.95th=[ 197], 00:23:11.684 | 99.99th=[ 197] 00:23:11.684 bw ( KiB/s): min= 440, max= 896, per=3.63%, avg=687.25, stdev=137.22, samples=20 00:23:11.684 iops : min= 110, max= 224, avg=171.80, stdev=34.30, samples=20 00:23:11.684 lat (msec) : 50=5.80%, 100=54.20%, 250=40.00% 00:23:11.684 cpu : usr=39.87%, sys=1.10%, ctx=1229, majf=0, minf=9 00:23:11.684 IO depths : 1=2.3%, 2=5.0%, 4=14.4%, 8=67.4%, 16=11.0%, 32=0.0%, >=64=0.0% 00:23:11.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 issued rwts: total=1725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.684 filename1: (groupid=0, jobs=1): err= 0: pid=97868: Thu Jul 25 12:30:24 2024 00:23:11.684 read: IOPS=239, BW=957KiB/s (980kB/s)(9608KiB/10037msec) 00:23:11.684 slat (usec): min=7, max=6014, avg=16.00, stdev=148.87 00:23:11.684 clat (msec): min=2, max=136, avg=66.74, stdev=23.39 00:23:11.684 lat (msec): min=2, max=136, avg=66.75, stdev=23.39 00:23:11.684 clat percentiles (msec): 00:23:11.684 | 1.00th=[ 4], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 49], 00:23:11.684 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:23:11.684 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 105], 00:23:11.684 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 138], 00:23:11.684 | 99.99th=[ 138] 00:23:11.684 bw ( KiB/s): min= 736, max= 1920, per=5.04%, avg=954.15, stdev=252.50, samples=20 00:23:11.684 iops : min= 184, max= 480, avg=238.50, stdev=63.15, samples=20 00:23:11.684 lat (msec) : 4=1.33%, 10=1.33%, 20=1.33%, 50=20.48%, 100=67.19% 00:23:11.684 lat (msec) : 250=8.33% 00:23:11.684 cpu : usr=44.89%, sys=1.49%, ctx=1944, majf=0, minf=9 00:23:11.684 IO depths : 1=1.1%, 2=2.5%, 4=10.7%, 8=73.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:23:11.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.684 filename1: (groupid=0, jobs=1): err= 0: 
pid=97869: Thu Jul 25 12:30:24 2024 00:23:11.684 read: IOPS=200, BW=804KiB/s (823kB/s)(8072KiB/10043msec) 00:23:11.684 slat (usec): min=5, max=8026, avg=21.02, stdev=229.12 00:23:11.684 clat (msec): min=13, max=180, avg=79.49, stdev=26.53 00:23:11.684 lat (msec): min=13, max=180, avg=79.51, stdev=26.54 00:23:11.684 clat percentiles (msec): 00:23:11.684 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 60], 00:23:11.684 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:23:11.684 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 116], 95.00th=[ 131], 00:23:11.684 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 182], 99.95th=[ 182], 00:23:11.684 | 99.99th=[ 182] 00:23:11.684 bw ( KiB/s): min= 520, max= 1024, per=4.23%, avg=801.30, stdev=145.07, samples=20 00:23:11.684 iops : min= 130, max= 256, avg=200.30, stdev=36.30, samples=20 00:23:11.684 lat (msec) : 20=0.79%, 50=12.49%, 100=70.02%, 250=16.70% 00:23:11.684 cpu : usr=34.96%, sys=1.20%, ctx=989, majf=0, minf=9 00:23:11.684 IO depths : 1=0.9%, 2=2.0%, 4=9.5%, 8=75.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:23:11.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 issued rwts: total=2018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.684 filename1: (groupid=0, jobs=1): err= 0: pid=97870: Thu Jul 25 12:30:24 2024 00:23:11.684 read: IOPS=168, BW=675KiB/s (692kB/s)(6764KiB/10014msec) 00:23:11.684 slat (usec): min=5, max=8023, avg=18.31, stdev=217.90 00:23:11.684 clat (msec): min=47, max=194, avg=94.64, stdev=25.11 00:23:11.684 lat (msec): min=47, max=194, avg=94.65, stdev=25.10 00:23:11.684 clat percentiles (msec): 00:23:11.684 | 1.00th=[ 48], 5.00th=[ 63], 10.00th=[ 70], 20.00th=[ 72], 00:23:11.684 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 102], 00:23:11.684 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 128], 95.00th=[ 140], 00:23:11.684 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 194], 99.95th=[ 194], 00:23:11.684 | 99.99th=[ 194] 00:23:11.684 bw ( KiB/s): min= 510, max= 784, per=3.52%, avg=666.89, stdev=106.22, samples=19 00:23:11.684 iops : min= 127, max= 196, avg=166.68, stdev=26.60, samples=19 00:23:11.684 lat (msec) : 50=2.19%, 100=56.53%, 250=41.28% 00:23:11.684 cpu : usr=37.22%, sys=1.35%, ctx=999, majf=0, minf=9 00:23:11.684 IO depths : 1=3.4%, 2=7.5%, 4=19.6%, 8=60.4%, 16=9.2%, 32=0.0%, >=64=0.0% 00:23:11.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 complete : 0=0.0%, 4=92.2%, 8=2.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 issued rwts: total=1691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.684 filename1: (groupid=0, jobs=1): err= 0: pid=97871: Thu Jul 25 12:30:24 2024 00:23:11.684 read: IOPS=213, BW=855KiB/s (876kB/s)(8576KiB/10026msec) 00:23:11.684 slat (usec): min=3, max=8042, avg=19.35, stdev=244.99 00:23:11.684 clat (msec): min=12, max=149, avg=74.65, stdev=22.02 00:23:11.684 lat (msec): min=12, max=149, avg=74.66, stdev=22.02 00:23:11.684 clat percentiles (msec): 00:23:11.684 | 1.00th=[ 19], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:23:11.684 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:23:11.684 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 114], 00:23:11.684 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 150], 99.95th=[ 150], 00:23:11.684 | 99.99th=[ 150] 
00:23:11.684 bw ( KiB/s): min= 560, max= 1362, per=4.49%, avg=851.30, stdev=154.79, samples=20 00:23:11.684 iops : min= 140, max= 340, avg=212.80, stdev=38.61, samples=20 00:23:11.684 lat (msec) : 20=1.49%, 50=14.13%, 100=69.92%, 250=14.46% 00:23:11.684 cpu : usr=37.94%, sys=1.35%, ctx=1097, majf=0, minf=9 00:23:11.684 IO depths : 1=1.1%, 2=2.4%, 4=8.9%, 8=75.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:23:11.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 complete : 0=0.0%, 4=89.7%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.684 filename1: (groupid=0, jobs=1): err= 0: pid=97872: Thu Jul 25 12:30:24 2024 00:23:11.684 read: IOPS=196, BW=786KiB/s (805kB/s)(7872KiB/10012msec) 00:23:11.684 slat (usec): min=6, max=8028, avg=23.74, stdev=312.61 00:23:11.684 clat (msec): min=23, max=166, avg=81.28, stdev=25.49 00:23:11.684 lat (msec): min=23, max=166, avg=81.30, stdev=25.49 00:23:11.684 clat percentiles (msec): 00:23:11.684 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 60], 00:23:11.684 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:23:11.684 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 124], 00:23:11.684 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 167], 00:23:11.684 | 99.99th=[ 167] 00:23:11.684 bw ( KiB/s): min= 512, max= 1072, per=4.10%, avg=777.37, stdev=160.91, samples=19 00:23:11.684 iops : min= 128, max= 268, avg=194.32, stdev=40.25, samples=19 00:23:11.684 lat (msec) : 50=9.96%, 100=67.12%, 250=22.92% 00:23:11.684 cpu : usr=39.08%, sys=1.36%, ctx=1081, majf=0, minf=9 00:23:11.684 IO depths : 1=1.7%, 2=3.7%, 4=11.4%, 8=71.4%, 16=11.8%, 32=0.0%, >=64=0.0% 00:23:11.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.684 filename1: (groupid=0, jobs=1): err= 0: pid=97873: Thu Jul 25 12:30:24 2024 00:23:11.684 read: IOPS=173, BW=695KiB/s (711kB/s)(6956KiB/10013msec) 00:23:11.684 slat (usec): min=4, max=8032, avg=28.44, stdev=362.61 00:23:11.684 clat (msec): min=24, max=192, avg=91.96, stdev=28.75 00:23:11.684 lat (msec): min=24, max=192, avg=91.99, stdev=28.77 00:23:11.684 clat percentiles (msec): 00:23:11.684 | 1.00th=[ 35], 5.00th=[ 54], 10.00th=[ 61], 20.00th=[ 72], 00:23:11.684 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 96], 00:23:11.684 | 70.00th=[ 107], 80.00th=[ 117], 90.00th=[ 131], 95.00th=[ 157], 00:23:11.684 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 192], 99.95th=[ 192], 00:23:11.684 | 99.99th=[ 192] 00:23:11.684 bw ( KiB/s): min= 512, max= 872, per=3.62%, avg=685.95, stdev=130.26, samples=19 00:23:11.684 iops : min= 128, max= 218, avg=171.47, stdev=32.58, samples=19 00:23:11.684 lat (msec) : 50=3.51%, 100=65.50%, 250=30.99% 00:23:11.684 cpu : usr=32.18%, sys=1.00%, ctx=962, majf=0, minf=9 00:23:11.684 IO depths : 1=2.1%, 2=4.5%, 4=13.4%, 8=69.1%, 16=10.9%, 32=0.0%, >=64=0.0% 00:23:11.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.684 issued rwts: total=1739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.684 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:23:11.684 filename1: (groupid=0, jobs=1): err= 0: pid=97874: Thu Jul 25 12:30:24 2024 00:23:11.684 read: IOPS=184, BW=739KiB/s (756kB/s)(7400KiB/10018msec) 00:23:11.684 slat (usec): min=3, max=8027, avg=22.69, stdev=297.84 00:23:11.684 clat (msec): min=22, max=195, avg=86.39, stdev=27.46 00:23:11.684 lat (msec): min=22, max=195, avg=86.42, stdev=27.48 00:23:11.684 clat percentiles (msec): 00:23:11.684 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 61], 00:23:11.684 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 85], 00:23:11.684 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 123], 95.00th=[ 140], 00:23:11.685 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 197], 99.95th=[ 197], 00:23:11.685 | 99.99th=[ 197] 00:23:11.685 bw ( KiB/s): min= 507, max= 992, per=3.80%, avg=720.58, stdev=149.68, samples=19 00:23:11.685 iops : min= 126, max= 248, avg=180.11, stdev=37.48, samples=19 00:23:11.685 lat (msec) : 50=7.46%, 100=63.62%, 250=28.92% 00:23:11.685 cpu : usr=32.60%, sys=1.00%, ctx=1262, majf=0, minf=9 00:23:11.685 IO depths : 1=1.6%, 2=3.7%, 4=12.0%, 8=70.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:23:11.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 issued rwts: total=1850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.685 filename1: (groupid=0, jobs=1): err= 0: pid=97875: Thu Jul 25 12:30:24 2024 00:23:11.685 read: IOPS=186, BW=748KiB/s (765kB/s)(7492KiB/10022msec) 00:23:11.685 slat (usec): min=3, max=8024, avg=21.47, stdev=244.88 00:23:11.685 clat (msec): min=38, max=168, avg=85.41, stdev=24.49 00:23:11.685 lat (msec): min=38, max=168, avg=85.43, stdev=24.49 00:23:11.685 clat percentiles (msec): 00:23:11.685 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 67], 00:23:11.685 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 88], 00:23:11.685 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 126], 00:23:11.685 | 99.00th=[ 146], 99.50th=[ 167], 99.90th=[ 169], 99.95th=[ 169], 00:23:11.685 | 99.99th=[ 169] 00:23:11.685 bw ( KiB/s): min= 464, max= 1072, per=3.93%, avg=745.25, stdev=148.82, samples=20 00:23:11.685 iops : min= 116, max= 268, avg=186.30, stdev=37.23, samples=20 00:23:11.685 lat (msec) : 50=7.53%, 100=63.11%, 250=29.36% 00:23:11.685 cpu : usr=38.88%, sys=1.21%, ctx=1041, majf=0, minf=9 00:23:11.685 IO depths : 1=2.5%, 2=5.1%, 4=13.7%, 8=68.1%, 16=10.6%, 32=0.0%, >=64=0.0% 00:23:11.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 issued rwts: total=1873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.685 filename2: (groupid=0, jobs=1): err= 0: pid=97876: Thu Jul 25 12:30:24 2024 00:23:11.685 read: IOPS=217, BW=869KiB/s (890kB/s)(8724KiB/10043msec) 00:23:11.685 slat (usec): min=4, max=8023, avg=23.79, stdev=257.41 00:23:11.685 clat (msec): min=12, max=179, avg=73.49, stdev=25.82 00:23:11.685 lat (msec): min=12, max=179, avg=73.52, stdev=25.82 00:23:11.685 clat percentiles (msec): 00:23:11.685 | 1.00th=[ 14], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 50], 00:23:11.685 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 79], 00:23:11.685 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 110], 95.00th=[ 118], 00:23:11.685 | 99.00th=[ 153], 99.50th=[ 
155], 99.90th=[ 155], 99.95th=[ 155], 00:23:11.685 | 99.99th=[ 180] 00:23:11.685 bw ( KiB/s): min= 634, max= 1152, per=4.57%, avg=865.70, stdev=174.99, samples=20 00:23:11.685 iops : min= 158, max= 288, avg=216.40, stdev=43.78, samples=20 00:23:11.685 lat (msec) : 20=1.47%, 50=19.58%, 100=63.59%, 250=15.36% 00:23:11.685 cpu : usr=45.29%, sys=1.29%, ctx=1326, majf=0, minf=9 00:23:11.685 IO depths : 1=1.8%, 2=3.7%, 4=11.5%, 8=71.6%, 16=11.4%, 32=0.0%, >=64=0.0% 00:23:11.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 issued rwts: total=2181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.685 filename2: (groupid=0, jobs=1): err= 0: pid=97877: Thu Jul 25 12:30:24 2024 00:23:11.685 read: IOPS=203, BW=814KiB/s (834kB/s)(8152KiB/10013msec) 00:23:11.685 slat (usec): min=3, max=8018, avg=18.02, stdev=198.37 00:23:11.685 clat (msec): min=30, max=179, avg=78.39, stdev=24.89 00:23:11.685 lat (msec): min=30, max=179, avg=78.41, stdev=24.89 00:23:11.685 clat percentiles (msec): 00:23:11.685 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:23:11.685 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:23:11.685 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 127], 00:23:11.685 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:23:11.685 | 99.99th=[ 180] 00:23:11.685 bw ( KiB/s): min= 512, max= 1074, per=4.27%, avg=808.60, stdev=153.68, samples=20 00:23:11.685 iops : min= 128, max= 268, avg=202.10, stdev=38.40, samples=20 00:23:11.685 lat (msec) : 50=13.64%, 100=69.23%, 250=17.12% 00:23:11.685 cpu : usr=38.25%, sys=1.19%, ctx=1080, majf=0, minf=9 00:23:11.685 IO depths : 1=1.6%, 2=3.3%, 4=10.6%, 8=72.5%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:11.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 issued rwts: total=2038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.685 filename2: (groupid=0, jobs=1): err= 0: pid=97878: Thu Jul 25 12:30:24 2024 00:23:11.685 read: IOPS=199, BW=798KiB/s (818kB/s)(8000KiB/10019msec) 00:23:11.685 slat (usec): min=5, max=8032, avg=15.65, stdev=179.42 00:23:11.685 clat (msec): min=24, max=213, avg=80.01, stdev=32.35 00:23:11.685 lat (msec): min=24, max=213, avg=80.03, stdev=32.35 00:23:11.685 clat percentiles (msec): 00:23:11.685 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:23:11.685 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 80], 00:23:11.685 | 70.00th=[ 86], 80.00th=[ 102], 90.00th=[ 126], 95.00th=[ 155], 00:23:11.685 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 213], 99.95th=[ 213], 00:23:11.685 | 99.99th=[ 213] 00:23:11.685 bw ( KiB/s): min= 384, max= 1072, per=4.16%, avg=787.37, stdev=226.58, samples=19 00:23:11.685 iops : min= 96, max= 268, avg=196.84, stdev=56.64, samples=19 00:23:11.685 lat (msec) : 50=17.00%, 100=62.10%, 250=20.90% 00:23:11.685 cpu : usr=38.26%, sys=1.12%, ctx=1092, majf=0, minf=9 00:23:11.685 IO depths : 1=0.9%, 2=1.8%, 4=9.5%, 8=75.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:23:11.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 issued rwts: total=2000,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:23:11.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.685 filename2: (groupid=0, jobs=1): err= 0: pid=97879: Thu Jul 25 12:30:24 2024 00:23:11.685 read: IOPS=201, BW=805KiB/s (824kB/s)(8068KiB/10021msec) 00:23:11.685 slat (nsec): min=4981, max=58599, avg=11796.76, stdev=4887.71 00:23:11.685 clat (msec): min=36, max=190, avg=79.35, stdev=25.91 00:23:11.685 lat (msec): min=36, max=190, avg=79.37, stdev=25.91 00:23:11.685 clat percentiles (msec): 00:23:11.685 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:23:11.685 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:23:11.685 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 117], 95.00th=[ 130], 00:23:11.685 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 190], 99.95th=[ 190], 00:23:11.685 | 99.99th=[ 190] 00:23:11.685 bw ( KiB/s): min= 488, max= 1024, per=4.24%, avg=804.30, stdev=150.66, samples=20 00:23:11.685 iops : min= 122, max= 256, avg=201.05, stdev=37.68, samples=20 00:23:11.685 lat (msec) : 50=14.82%, 100=68.17%, 250=17.01% 00:23:11.685 cpu : usr=32.19%, sys=0.93%, ctx=879, majf=0, minf=9 00:23:11.685 IO depths : 1=1.3%, 2=3.0%, 4=10.3%, 8=73.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:11.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 issued rwts: total=2017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.685 filename2: (groupid=0, jobs=1): err= 0: pid=97880: Thu Jul 25 12:30:24 2024 00:23:11.685 read: IOPS=225, BW=901KiB/s (923kB/s)(9064KiB/10059msec) 00:23:11.685 slat (usec): min=6, max=8053, avg=25.56, stdev=313.20 00:23:11.685 clat (msec): min=2, max=156, avg=70.78, stdev=28.87 00:23:11.685 lat (msec): min=2, max=156, avg=70.81, stdev=28.88 00:23:11.685 clat percentiles (msec): 00:23:11.685 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 49], 00:23:11.685 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 71], 60.00th=[ 75], 00:23:11.685 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 109], 95.00th=[ 120], 00:23:11.685 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:23:11.685 | 99.99th=[ 157] 00:23:11.685 bw ( KiB/s): min= 508, max= 1632, per=4.75%, avg=899.80, stdev=243.55, samples=20 00:23:11.685 iops : min= 127, max= 408, avg=224.95, stdev=60.89, samples=20 00:23:11.685 lat (msec) : 4=1.41%, 10=2.12%, 20=0.71%, 50=20.43%, 100=59.89% 00:23:11.685 lat (msec) : 250=15.45% 00:23:11.685 cpu : usr=41.72%, sys=1.29%, ctx=1218, majf=0, minf=9 00:23:11.685 IO depths : 1=1.5%, 2=3.3%, 4=11.3%, 8=72.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:23:11.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.685 issued rwts: total=2266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.685 filename2: (groupid=0, jobs=1): err= 0: pid=97881: Thu Jul 25 12:30:24 2024 00:23:11.685 read: IOPS=209, BW=837KiB/s (857kB/s)(8400KiB/10032msec) 00:23:11.685 slat (usec): min=4, max=8043, avg=22.93, stdev=302.89 00:23:11.685 clat (msec): min=33, max=166, avg=76.27, stdev=24.48 00:23:11.685 lat (msec): min=33, max=166, avg=76.29, stdev=24.48 00:23:11.685 clat percentiles (msec): 00:23:11.685 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:23:11.685 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 
00:23:11.685 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 121], 00:23:11.685 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 167], 00:23:11.685 | 99.99th=[ 167] 00:23:11.685 bw ( KiB/s): min= 592, max= 1072, per=4.41%, avg=835.35, stdev=144.69, samples=20 00:23:11.685 iops : min= 148, max= 268, avg=208.80, stdev=36.21, samples=20 00:23:11.685 lat (msec) : 50=17.67%, 100=68.38%, 250=13.95% 00:23:11.685 cpu : usr=32.00%, sys=1.12%, ctx=870, majf=0, minf=9 00:23:11.685 IO depths : 1=0.7%, 2=1.7%, 4=7.4%, 8=76.7%, 16=13.6%, 32=0.0%, >=64=0.0% 00:23:11.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.686 complete : 0=0.0%, 4=89.6%, 8=6.5%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.686 issued rwts: total=2100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.686 filename2: (groupid=0, jobs=1): err= 0: pid=97882: Thu Jul 25 12:30:24 2024 00:23:11.686 read: IOPS=222, BW=890KiB/s (911kB/s)(8952KiB/10057msec) 00:23:11.686 slat (usec): min=5, max=2317, avg=13.67, stdev=70.71 00:23:11.686 clat (msec): min=25, max=168, avg=71.66, stdev=28.68 00:23:11.686 lat (msec): min=25, max=168, avg=71.67, stdev=28.68 00:23:11.686 clat percentiles (msec): 00:23:11.686 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:23:11.686 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 72], 00:23:11.686 | 70.00th=[ 80], 80.00th=[ 94], 90.00th=[ 117], 95.00th=[ 134], 00:23:11.686 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:23:11.686 | 99.99th=[ 169] 00:23:11.686 bw ( KiB/s): min= 512, max= 1200, per=4.69%, avg=888.80, stdev=234.12, samples=20 00:23:11.686 iops : min= 128, max= 300, avg=222.20, stdev=58.53, samples=20 00:23:11.686 lat (msec) : 50=24.31%, 100=60.68%, 250=15.01% 00:23:11.686 cpu : usr=40.85%, sys=1.27%, ctx=1490, majf=0, minf=9 00:23:11.686 IO depths : 1=0.9%, 2=1.8%, 4=7.9%, 8=76.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:23:11.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.686 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.686 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.686 filename2: (groupid=0, jobs=1): err= 0: pid=97883: Thu Jul 25 12:30:24 2024 00:23:11.686 read: IOPS=206, BW=826KiB/s (846kB/s)(8288KiB/10029msec) 00:23:11.686 slat (usec): min=4, max=8036, avg=25.11, stdev=317.42 00:23:11.686 clat (msec): min=12, max=167, avg=77.23, stdev=26.29 00:23:11.686 lat (msec): min=12, max=167, avg=77.25, stdev=26.29 00:23:11.686 clat percentiles (msec): 00:23:11.686 | 1.00th=[ 20], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:23:11.686 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:23:11.686 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 120], 95.00th=[ 131], 00:23:11.686 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:23:11.686 | 99.99th=[ 167] 00:23:11.686 bw ( KiB/s): min= 560, max= 1120, per=4.35%, avg=824.80, stdev=163.64, samples=20 00:23:11.686 iops : min= 140, max= 280, avg=206.20, stdev=40.91, samples=20 00:23:11.686 lat (msec) : 20=1.54%, 50=15.40%, 100=65.64%, 250=17.42% 00:23:11.686 cpu : usr=32.37%, sys=0.97%, ctx=957, majf=0, minf=9 00:23:11.686 IO depths : 1=1.5%, 2=3.0%, 4=10.7%, 8=72.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:23:11.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.686 complete : 
0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.686 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:11.686 00:23:11.686 Run status group 0 (all jobs): 00:23:11.686 READ: bw=18.5MiB/s (19.4MB/s), 672KiB/s-957KiB/s (689kB/s-980kB/s), io=186MiB (195MB), run=10001-10059msec 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 
12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 bdev_null0 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 [2024-07-25 12:30:25.094314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 bdev_null1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.686 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.687 { 00:23:11.687 "params": { 00:23:11.687 "name": "Nvme$subsystem", 00:23:11.687 "trtype": "$TEST_TRANSPORT", 00:23:11.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.687 "adrfam": "ipv4", 00:23:11.687 "trsvcid": "$NVMF_PORT", 00:23:11.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.687 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:11.687 "hdgst": ${hdgst:-false}, 00:23:11.687 "ddgst": ${ddgst:-false} 00:23:11.687 }, 00:23:11.687 "method": "bdev_nvme_attach_controller" 00:23:11.687 } 00:23:11.687 EOF 00:23:11.687 )") 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.687 { 00:23:11.687 "params": { 00:23:11.687 "name": "Nvme$subsystem", 00:23:11.687 "trtype": "$TEST_TRANSPORT", 00:23:11.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.687 "adrfam": "ipv4", 00:23:11.687 "trsvcid": "$NVMF_PORT", 00:23:11.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.687 "hdgst": ${hdgst:-false}, 00:23:11.687 "ddgst": ${ddgst:-false} 00:23:11.687 }, 00:23:11.687 "method": "bdev_nvme_attach_controller" 00:23:11.687 } 00:23:11.687 EOF 00:23:11.687 )") 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:11.687 "params": { 00:23:11.687 "name": "Nvme0", 00:23:11.687 "trtype": "tcp", 00:23:11.687 "traddr": "10.0.0.2", 00:23:11.687 "adrfam": "ipv4", 00:23:11.687 "trsvcid": "4420", 00:23:11.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:11.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:11.687 "hdgst": false, 00:23:11.687 "ddgst": false 00:23:11.687 }, 00:23:11.687 "method": "bdev_nvme_attach_controller" 00:23:11.687 },{ 00:23:11.687 "params": { 00:23:11.687 "name": "Nvme1", 00:23:11.687 "trtype": "tcp", 00:23:11.687 "traddr": "10.0.0.2", 00:23:11.687 "adrfam": "ipv4", 00:23:11.687 "trsvcid": "4420", 00:23:11.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.687 "hdgst": false, 00:23:11.687 "ddgst": false 00:23:11.687 }, 00:23:11.687 "method": "bdev_nvme_attach_controller" 00:23:11.687 }' 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:11.687 12:30:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:11.687 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:11.687 ... 00:23:11.687 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:11.687 ... 
00:23:11.687 fio-3.35 00:23:11.687 Starting 4 threads 00:23:16.953 00:23:16.953 filename0: (groupid=0, jobs=1): err= 0: pid=98009: Thu Jul 25 12:30:31 2024 00:23:16.953 read: IOPS=1970, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5003msec) 00:23:16.953 slat (nsec): min=5617, max=36751, avg=13504.00, stdev=4585.89 00:23:16.953 clat (usec): min=3040, max=8120, avg=3999.61, stdev=157.17 00:23:16.953 lat (usec): min=3055, max=8140, avg=4013.12, stdev=156.56 00:23:16.953 clat percentiles (usec): 00:23:16.953 | 1.00th=[ 3884], 5.00th=[ 3916], 10.00th=[ 3916], 20.00th=[ 3949], 00:23:16.953 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 3982], 60.00th=[ 4015], 00:23:16.953 | 70.00th=[ 4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4080], 00:23:16.953 | 99.00th=[ 4359], 99.50th=[ 4752], 99.90th=[ 5997], 99.95th=[ 8094], 00:23:16.953 | 99.99th=[ 8094] 00:23:16.953 bw ( KiB/s): min=15616, max=15872, per=25.04%, avg=15783.00, stdev=101.08, samples=9 00:23:16.953 iops : min= 1952, max= 1984, avg=1972.78, stdev=12.78, samples=9 00:23:16.953 lat (msec) : 4=58.37%, 10=41.63% 00:23:16.953 cpu : usr=94.04%, sys=4.78%, ctx=54, majf=0, minf=9 00:23:16.953 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:16.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.953 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.953 issued rwts: total=9856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.953 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:16.953 filename0: (groupid=0, jobs=1): err= 0: pid=98010: Thu Jul 25 12:30:31 2024 00:23:16.953 read: IOPS=1970, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5001msec) 00:23:16.953 slat (nsec): min=3978, max=63402, avg=15261.75, stdev=4315.83 00:23:16.953 clat (usec): min=2776, max=7290, avg=3981.19, stdev=122.40 00:23:16.953 lat (usec): min=2788, max=7303, avg=3996.45, stdev=122.75 00:23:16.953 clat percentiles (usec): 00:23:16.953 | 1.00th=[ 3884], 5.00th=[ 3884], 10.00th=[ 3916], 20.00th=[ 3916], 00:23:16.953 | 30.00th=[ 3949], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 3982], 00:23:16.953 | 70.00th=[ 4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4047], 00:23:16.953 | 99.00th=[ 4293], 99.50th=[ 4686], 99.90th=[ 6063], 99.95th=[ 6063], 00:23:16.953 | 99.99th=[ 7308] 00:23:16.953 bw ( KiB/s): min=15616, max=15903, per=25.05%, avg=15790.11, stdev=94.66, samples=9 00:23:16.953 iops : min= 1952, max= 1987, avg=1973.67, stdev=11.70, samples=9 00:23:16.953 lat (msec) : 4=71.60%, 10=28.40% 00:23:16.953 cpu : usr=93.64%, sys=5.08%, ctx=1031, majf=0, minf=9 00:23:16.954 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:16.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.954 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.954 issued rwts: total=9856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.954 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:16.954 filename1: (groupid=0, jobs=1): err= 0: pid=98011: Thu Jul 25 12:30:31 2024 00:23:16.954 read: IOPS=1969, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5004msec) 00:23:16.954 slat (nsec): min=4218, max=48447, avg=8944.03, stdev=2917.99 00:23:16.954 clat (usec): min=3838, max=9518, avg=4014.43, stdev=172.35 00:23:16.954 lat (usec): min=3849, max=9549, avg=4023.37, stdev=172.55 00:23:16.954 clat percentiles (usec): 00:23:16.954 | 1.00th=[ 3884], 5.00th=[ 3949], 10.00th=[ 3949], 20.00th=[ 3982], 00:23:16.954 | 30.00th=[ 3982], 40.00th=[ 3982], 
50.00th=[ 4015], 60.00th=[ 4015], 00:23:16.954 | 70.00th=[ 4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4080], 00:23:16.954 | 99.00th=[ 4293], 99.50th=[ 4621], 99.90th=[ 5211], 99.95th=[ 9372], 00:23:16.954 | 99.99th=[ 9503] 00:23:16.954 bw ( KiB/s): min=15616, max=15872, per=25.03%, avg=15779.56, stdev=106.67, samples=9 00:23:16.954 iops : min= 1952, max= 1984, avg=1972.44, stdev=13.33, samples=9 00:23:16.954 lat (msec) : 4=49.71%, 10=50.29% 00:23:16.954 cpu : usr=93.82%, sys=5.04%, ctx=9, majf=0, minf=9 00:23:16.954 IO depths : 1=10.8%, 2=25.0%, 4=50.0%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:16.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.954 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.954 issued rwts: total=9856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.954 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:16.954 filename1: (groupid=0, jobs=1): err= 0: pid=98012: Thu Jul 25 12:30:31 2024 00:23:16.954 read: IOPS=1972, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5001msec) 00:23:16.954 slat (usec): min=7, max=573, avg=15.86, stdev= 6.68 00:23:16.954 clat (usec): min=1122, max=5942, avg=3977.43, stdev=136.49 00:23:16.954 lat (usec): min=1130, max=5956, avg=3993.30, stdev=136.65 00:23:16.954 clat percentiles (usec): 00:23:16.954 | 1.00th=[ 3884], 5.00th=[ 3884], 10.00th=[ 3916], 20.00th=[ 3916], 00:23:16.954 | 30.00th=[ 3949], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 3982], 00:23:16.954 | 70.00th=[ 4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4080], 00:23:16.954 | 99.00th=[ 4359], 99.50th=[ 4752], 99.90th=[ 5342], 99.95th=[ 5342], 00:23:16.954 | 99.99th=[ 5932] 00:23:16.954 bw ( KiB/s): min=15616, max=15903, per=25.05%, avg=15793.56, stdev=93.32, samples=9 00:23:16.954 iops : min= 1952, max= 1987, avg=1974.00, stdev=11.57, samples=9 00:23:16.954 lat (msec) : 2=0.08%, 4=71.01%, 10=28.91% 00:23:16.954 cpu : usr=93.98%, sys=4.82%, ctx=9, majf=0, minf=9 00:23:16.954 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:16.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.954 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.954 issued rwts: total=9864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.954 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:16.954 00:23:16.954 Run status group 0 (all jobs): 00:23:16.954 READ: bw=61.6MiB/s (64.6MB/s), 15.4MiB/s-15.4MiB/s (16.1MB/s-16.2MB/s), io=308MiB (323MB), run=5001-5004msec 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 ************************************ 00:23:16.954 END TEST fio_dif_rand_params 00:23:16.954 ************************************ 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.954 00:23:16.954 real 0m23.799s 00:23:16.954 user 2m5.846s 00:23:16.954 sys 0m5.666s 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 12:30:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:16.954 12:30:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:16.954 12:30:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 ************************************ 00:23:16.954 START TEST fio_dif_digest 00:23:16.954 ************************************ 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:16.954 12:30:31 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 bdev_null0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:16.954 [2024-07-25 12:30:31.361965] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:16.954 12:30:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.954 { 00:23:16.954 "params": { 00:23:16.954 "name": "Nvme$subsystem", 
00:23:16.954 "trtype": "$TEST_TRANSPORT", 00:23:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.955 "adrfam": "ipv4", 00:23:16.955 "trsvcid": "$NVMF_PORT", 00:23:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.955 "hdgst": ${hdgst:-false}, 00:23:16.955 "ddgst": ${ddgst:-false} 00:23:16.955 }, 00:23:16.955 "method": "bdev_nvme_attach_controller" 00:23:16.955 } 00:23:16.955 EOF 00:23:16.955 )") 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:16.955 "params": { 00:23:16.955 "name": "Nvme0", 00:23:16.955 "trtype": "tcp", 00:23:16.955 "traddr": "10.0.0.2", 00:23:16.955 "adrfam": "ipv4", 00:23:16.955 "trsvcid": "4420", 00:23:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.955 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:16.955 "hdgst": true, 00:23:16.955 "ddgst": true 00:23:16.955 }, 00:23:16.955 "method": "bdev_nvme_attach_controller" 00:23:16.955 }' 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:16.955 12:30:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:16.955 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:16.955 ... 00:23:16.955 fio-3.35 00:23:16.955 Starting 3 threads 00:23:29.168 00:23:29.168 filename0: (groupid=0, jobs=1): err= 0: pid=98113: Thu Jul 25 12:30:42 2024 00:23:29.168 read: IOPS=243, BW=30.5MiB/s (32.0MB/s)(305MiB/10005msec) 00:23:29.168 slat (nsec): min=6185, max=71908, avg=13770.55, stdev=4156.71 00:23:29.168 clat (usec): min=9319, max=54499, avg=12278.08, stdev=2166.98 00:23:29.168 lat (usec): min=9333, max=54511, avg=12291.85, stdev=2167.10 00:23:29.168 clat percentiles (usec): 00:23:29.168 | 1.00th=[10290], 5.00th=[10945], 10.00th=[11207], 20.00th=[11600], 00:23:29.168 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:23:29.168 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13042], 95.00th=[13435], 00:23:29.168 | 99.00th=[14091], 99.50th=[14353], 99.90th=[53216], 99.95th=[54264], 00:23:29.168 | 99.99th=[54264] 00:23:29.168 bw ( KiB/s): min=28672, max=32000, per=38.70%, avg=31225.16, stdev=911.29, samples=19 00:23:29.168 iops : min= 224, max= 250, avg=243.89, stdev= 7.10, samples=19 00:23:29.168 lat (msec) : 10=0.41%, 20=99.34%, 100=0.25% 00:23:29.168 cpu : usr=92.47%, sys=6.09%, ctx=28, majf=0, minf=0 00:23:29.168 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:29.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.168 issued rwts: total=2441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.168 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:29.168 filename0: (groupid=0, jobs=1): err= 0: pid=98114: Thu Jul 25 12:30:42 2024 00:23:29.168 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(269MiB/10003msec) 00:23:29.168 slat (nsec): min=7898, max=44651, avg=13410.91, stdev=3601.87 00:23:29.168 clat (usec): min=7192, max=19947, avg=13906.13, stdev=1210.25 00:23:29.168 lat (usec): min=7205, max=19969, avg=13919.54, stdev=1210.32 00:23:29.168 clat percentiles (usec): 00:23:29.168 | 1.00th=[ 8717], 5.00th=[12256], 10.00th=[12649], 20.00th=[13042], 00:23:29.168 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:23:29.168 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:23:29.168 | 99.00th=[16319], 99.50th=[16909], 99.90th=[19792], 99.95th=[20055], 00:23:29.168 | 99.99th=[20055] 00:23:29.168 bw ( KiB/s): min=26112, max=29952, per=34.18%, avg=27580.63, stdev=918.44, samples=19 00:23:29.168 iops : min= 204, max= 234, avg=215.47, stdev= 7.18, samples=19 00:23:29.168 lat (msec) : 10=1.30%, 20=98.70% 00:23:29.168 cpu : usr=93.05%, sys=5.61%, ctx=14, majf=0, minf=0 00:23:29.168 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:29.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.168 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.168 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:29.168 filename0: (groupid=0, jobs=1): err= 0: pid=98115: Thu Jul 25 12:30:42 2024 00:23:29.168 read: IOPS=172, BW=21.6MiB/s (22.7MB/s)(217MiB/10044msec) 00:23:29.168 slat (nsec): min=8089, max=41856, avg=13610.68, stdev=3224.05 00:23:29.168 clat (usec): min=10517, max=51280, avg=17314.72, 
stdev=1517.46 00:23:29.168 lat (usec): min=10529, max=51293, avg=17328.33, stdev=1517.47 00:23:29.168 clat percentiles (usec): 00:23:29.168 | 1.00th=[12649], 5.00th=[15795], 10.00th=[16188], 20.00th=[16581], 00:23:29.168 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17433], 00:23:29.168 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[18744], 00:23:29.168 | 99.00th=[19268], 99.50th=[19530], 99.90th=[47973], 99.95th=[51119], 00:23:29.168 | 99.99th=[51119] 00:23:29.168 bw ( KiB/s): min=21504, max=23040, per=27.51%, avg=22197.40, stdev=333.94, samples=20 00:23:29.168 iops : min= 168, max= 180, avg=173.40, stdev= 2.60, samples=20 00:23:29.168 lat (msec) : 20=99.65%, 50=0.29%, 100=0.06% 00:23:29.168 cpu : usr=93.32%, sys=5.49%, ctx=17, majf=0, minf=0 00:23:29.168 IO depths : 1=4.4%, 2=95.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:29.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.169 issued rwts: total=1736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.169 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:29.169 00:23:29.169 Run status group 0 (all jobs): 00:23:29.169 READ: bw=78.8MiB/s (82.6MB/s), 21.6MiB/s-30.5MiB/s (22.7MB/s-32.0MB/s), io=792MiB (830MB), run=10003-10044msec 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:29.169 ************************************ 00:23:29.169 END TEST fio_dif_digest 00:23:29.169 ************************************ 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.169 00:23:29.169 real 0m11.035s 00:23:29.169 user 0m28.631s 00:23:29.169 sys 0m1.971s 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.169 12:30:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:29.169 12:30:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:29.169 12:30:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:29.169 
12:30:42 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.169 rmmod nvme_tcp 00:23:29.169 rmmod nvme_fabrics 00:23:29.169 rmmod nvme_keyring 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97362 ']' 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97362 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 97362 ']' 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 97362 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97362 00:23:29.169 killing process with pid 97362 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97362' 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@969 -- # kill 97362 00:23:29.169 12:30:42 nvmf_dif -- common/autotest_common.sh@974 -- # wait 97362 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:29.169 12:30:42 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:29.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:29.169 Waiting for block devices as requested 00:23:29.169 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:29.169 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:29.169 12:30:43 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:29.169 12:30:43 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:29.169 12:30:43 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.169 12:30:43 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.169 12:30:43 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.169 12:30:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:29.169 12:30:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.169 12:30:43 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:29.169 00:23:29.169 real 1m0.191s 00:23:29.169 user 3m52.075s 00:23:29.169 sys 0m14.982s 00:23:29.169 12:30:43 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.169 12:30:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:29.169 ************************************ 00:23:29.169 END TEST nvmf_dif 00:23:29.169 ************************************ 00:23:29.169 12:30:43 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:29.169 12:30:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:29.169 12:30:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:29.169 12:30:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.169 ************************************ 00:23:29.169 START TEST nvmf_abort_qd_sizes 00:23:29.169 ************************************ 
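For reference, the digest-enabled fio run that the nvmf_dif suite assembled above can be reproduced standalone. A minimal sketch, assuming the SPDK fio bdev plugin was built at the path the trace shows; only the bdev_nvme_attach_controller parameters come from the printed config, while the "subsystems" wrapper, the temporary file, the job options, and the Nvme0n1 bdev name (SPDK's usual namespace naming for a controller called Nvme0) are assumptions:

# Reconstructed sketch, not part of the harness. The JSON wrapper and the fio
# job options below are assumptions; the attach parameters mirror the config
# the trace printed (TCP transport, header and data digests enabled).
cat > /tmp/bdev.json << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
JSON
# The harness streamed the JSON and job file over /dev/fd; plain files work too.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json \
    --name=filename0 --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3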
00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:29.169 * Looking for test storage... 00:23:29.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:29.169 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:29.170 12:30:43 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:29.170 Cannot find device "nvmf_tgt_br" 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.170 Cannot find device "nvmf_tgt_br2" 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:29.170 Cannot find device "nvmf_tgt_br" 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:29.170 Cannot find device "nvmf_tgt_br2" 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.170 12:30:43 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:29.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:23:29.170 00:23:29.170 --- 10.0.0.2 ping statistics --- 00:23:29.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.170 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:29.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:23:29.170 00:23:29.170 --- 10.0.0.3 ping statistics --- 00:23:29.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.170 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:23:29.170 00:23:29.170 --- 10.0.0.1 ping statistics --- 00:23:29.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.170 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:29.170 12:30:43 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:29.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:29.779 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:29.779 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=98707 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 98707 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 98707 ']' 00:23:29.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.779 12:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:30.037 [2024-07-25 12:30:44.659333] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:23:30.037 [2024-07-25 12:30:44.659442] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.037 [2024-07-25 12:30:44.799804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.294 [2024-07-25 12:30:44.946052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.294 [2024-07-25 12:30:44.946127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.294 [2024-07-25 12:30:44.946141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.294 [2024-07-25 12:30:44.946151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.294 [2024-07-25 12:30:44.946161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.294 [2024-07-25 12:30:44.946341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.294 [2024-07-25 12:30:44.946419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.294 [2024-07-25 12:30:44.947026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.294 [2024-07-25 12:30:44.947050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:23:30.860 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:31.119 12:30:45 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.119 12:30:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:31.119 ************************************ 00:23:31.119 START TEST spdk_target_abort 00:23:31.119 ************************************ 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:31.119 spdk_targetn1 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:31.119 [2024-07-25 12:30:45.847890] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.119 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:31.119 [2024-07-25 12:30:45.876077] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.120 12:30:45 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:31.120 12:30:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:34.406 Initializing NVMe Controllers 00:23:34.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:34.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:34.406 Initialization complete. Launching workers. 
00:23:34.406 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11205, failed: 0 00:23:34.406 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1071, failed to submit 10134 00:23:34.406 success 774, unsuccess 297, failed 0 00:23:34.406 12:30:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:34.406 12:30:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:37.740 Initializing NVMe Controllers 00:23:37.740 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:37.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:37.740 Initialization complete. Launching workers. 00:23:37.740 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5980, failed: 0 00:23:37.740 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 4734 00:23:37.740 success 233, unsuccess 1013, failed 0 00:23:37.740 12:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:37.740 12:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:41.021 Initializing NVMe Controllers 00:23:41.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:41.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:41.021 Initialization complete. Launching workers. 
00:23:41.021 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30665, failed: 0 00:23:41.021 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2572, failed to submit 28093 00:23:41.021 success 467, unsuccess 2105, failed 0 00:23:41.021 12:30:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:41.021 12:30:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.021 12:30:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:41.021 12:30:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.021 12:30:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:41.021 12:30:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.021 12:30:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98707 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 98707 ']' 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 98707 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98707 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:41.956 killing process with pid 98707 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98707' 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 98707 00:23:41.956 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 98707 00:23:42.218 00:23:42.218 real 0m11.233s 00:23:42.218 user 0m45.852s 00:23:42.218 sys 0m1.718s 00:23:42.218 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.218 12:30:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:42.218 ************************************ 00:23:42.218 END TEST spdk_target_abort 00:23:42.218 ************************************ 00:23:42.218 12:30:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:42.218 12:30:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:42.218 12:30:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.218 12:30:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:42.218 ************************************ 00:23:42.218 START TEST kernel_target_abort 00:23:42.218 
************************************ 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:42.218 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:42.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:42.785 Waiting for block devices as requested 00:23:42.785 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:42.785 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:42.785 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:42.785 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:42.785 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:42.785 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:42.785 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:42.786 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:42.786 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:42.786 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:42.786 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:43.044 No valid GPT data, bailing 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:43.044 No valid GPT data, bailing 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:43.044 No valid GPT data, bailing 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:43.044 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:43.303 No valid GPT data, bailing 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 --hostid=12aea35b-e003-4c84-a908-b9044a3380d2 -a 10.0.0.1 -t tcp -s 4420 00:23:43.303 00:23:43.303 Discovery Log Number of Records 2, Generation counter 2 00:23:43.303 =====Discovery Log Entry 0====== 00:23:43.303 trtype: tcp 00:23:43.303 adrfam: ipv4 00:23:43.303 subtype: current discovery subsystem 00:23:43.303 treq: not specified, sq flow control disable supported 00:23:43.303 portid: 1 00:23:43.303 trsvcid: 4420 00:23:43.303 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:43.303 traddr: 10.0.0.1 00:23:43.303 eflags: none 00:23:43.303 sectype: none 00:23:43.303 =====Discovery Log Entry 1====== 00:23:43.303 trtype: tcp 00:23:43.303 adrfam: ipv4 00:23:43.303 subtype: nvme subsystem 00:23:43.303 treq: not specified, sq flow control disable supported 00:23:43.303 portid: 1 00:23:43.303 trsvcid: 4420 00:23:43.303 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:43.303 traddr: 10.0.0.1 00:23:43.303 eflags: none 00:23:43.303 sectype: none 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:43.303 12:30:57 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:43.303 12:30:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:43.303 12:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:46.586 Initializing NVMe Controllers 00:23:46.586 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:46.586 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:46.586 Initialization complete. Launching workers. 00:23:46.586 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34226, failed: 0 00:23:46.586 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34226, failed to submit 0 00:23:46.586 success 0, unsuccess 34226, failed 0 00:23:46.586 12:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:46.586 12:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:49.870 Initializing NVMe Controllers 00:23:49.870 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:49.870 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:49.870 Initialization complete. Launching workers. 
00:23:49.870 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66085, failed: 0 00:23:49.870 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28402, failed to submit 37683 00:23:49.870 success 0, unsuccess 28402, failed 0 00:23:49.870 12:31:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:49.870 12:31:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:53.152 Initializing NVMe Controllers 00:23:53.152 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:53.152 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:53.152 Initialization complete. Launching workers. 00:23:53.152 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81177, failed: 0 00:23:53.152 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20292, failed to submit 60885 00:23:53.152 success 0, unsuccess 20292, failed 0 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:53.152 12:31:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:53.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:56.694 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:56.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:56.694 00:23:56.694 real 0m14.096s 00:23:56.694 user 0m6.239s 00:23:56.694 sys 0m5.144s 00:23:56.694 12:31:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.694 ************************************ 00:23:56.694 12:31:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:56.694 END TEST kernel_target_abort 00:23:56.694 ************************************ 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:56.694 
12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.694 rmmod nvme_tcp 00:23:56.694 rmmod nvme_fabrics 00:23:56.694 rmmod nvme_keyring 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.694 Process with pid 98707 is not found 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 98707 ']' 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 98707 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 98707 ']' 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 98707 00:23:56.694 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (98707) - No such process 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 98707 is not found' 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:56.694 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:56.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:56.952 Waiting for block devices as requested 00:23:56.952 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:56.952 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:57.210 00:23:57.210 real 0m28.529s 00:23:57.210 user 0m53.251s 00:23:57.210 sys 0m8.188s 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:57.210 12:31:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:57.210 ************************************ 00:23:57.210 END TEST nvmf_abort_qd_sizes 00:23:57.210 ************************************ 00:23:57.210 12:31:11 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:57.210 12:31:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:57.210 12:31:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:57.210 12:31:11 -- common/autotest_common.sh@10 -- # set +x 
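The kernel-target half of the test above drives everything through the Linux nvmet configfs tree: the mkdir/echo/ln -s lines near the top build the export, and the rm/rmdir/modprobe -r lines just before the timing summary tear it down. Bash xtrace never prints redirection targets, so the files receiving those echoed values are not visible in the log; the sketch below maps the traced values onto the standard nvmet configfs layout (that mapping is an assumption, the values themselves are from the trace):

#!/usr/bin/env bash
# Export /dev/nvme1n1 over NVMe/TCP via the in-kernel target, then undo it.
# Requires root and a kernel with the nvmet/nvmet_tcp modules available.
set -e
cfs=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
modprobe nvmet
modprobe nvmet_tcp
mkdir "$cfs/subsystems/$nqn" "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1"
echo 1            > "$cfs/subsystems/$nqn/attr_allow_any_host"
echo /dev/nvme1n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"
echo 1            > "$cfs/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
echo tcp          > "$cfs/ports/1/addr_trtype"
echo 4420         > "$cfs/ports/1/addr_trsvcid"
echo ipv4         > "$cfs/ports/1/addr_adrfam"
ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"
# Teardown mirrors clean_kernel_target in the trace:
#   echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"
#   rm -f "$cfs/ports/1/subsystems/$nqn"
#   rmdir "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1" "$cfs/subsystems/$nqn"
#   modprobe -r nvmet_tcp nvmet

(The traced 'echo SPDK-nqn.2016-06.io.spdk:testnqn' sets the subsystem's serial/model string; the exact attribute file is likewise hidden by xtrace, so it is omitted here.)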
00:23:57.210 ************************************ 00:23:57.210 START TEST keyring_file 00:23:57.210 ************************************ 00:23:57.210 12:31:11 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:57.210 * Looking for test storage... 00:23:57.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:57.210 12:31:11 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:57.210 12:31:11 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:57.210 12:31:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:57.210 12:31:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.210 12:31:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.210 12:31:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.210 12:31:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.210 12:31:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.210 12:31:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:57.211 12:31:12 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.211 12:31:12 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.211 12:31:12 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.211 12:31:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.211 12:31:12 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.211 12:31:12 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.211 12:31:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:57.211 12:31:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@47 -- # : 0 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.211 12:31:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:57.211 12:31:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:57.211 12:31:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:57.211 12:31:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:57.211 12:31:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:57.211 12:31:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:57.211 12:31:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:57.211 12:31:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:57.211 12:31:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:57.211 12:31:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:57.211 12:31:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:57.211 12:31:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:57.211 12:31:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NIkDH1wi9Z 00:23:57.211 12:31:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:57.211 12:31:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:57.211 12:31:12 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NIkDH1wi9Z 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NIkDH1wi9Z 00:23:57.470 12:31:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NIkDH1wi9Z 00:23:57.470 12:31:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PARqRXsaAe 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:57.470 12:31:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:57.470 12:31:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:57.470 12:31:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:57.470 12:31:12 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:57.470 12:31:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:57.470 12:31:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PARqRXsaAe 00:23:57.470 12:31:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PARqRXsaAe 00:23:57.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.470 12:31:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.PARqRXsaAe 00:23:57.470 12:31:12 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:57.470 12:31:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=99592 00:23:57.470 12:31:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99592 00:23:57.470 12:31:12 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99592 ']' 00:23:57.470 12:31:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.470 12:31:12 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.470 12:31:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.470 12:31:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.470 12:31:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 [2024-07-25 12:31:12.224641] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
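At this point the two prep_key calls have produced the TLS PSK files used for the rest of the run: mktemp allocated /tmp/tmp.NIkDH1wi9Z (key0) and /tmp/tmp.PARqRXsaAe (key1), a small 'python -' heredoc (whose body xtrace does not show) rendered each raw key into the NVMe TLS PSK interchange format, and chmod 0600 locked the permissions down before spdk_tgt came up. A hedged sketch of that formatting step, assuming the documented interchange layout (base64 over the key bytes plus a little-endian CRC32, digest 0 written as "00", trailing colon):

#!/usr/bin/env bash
# Produce an interchange-format PSK file like the traced prep_key helper.
key_hex=00112233445566778899aabbccddeeff   # key0 material from the trace
path=$(mktemp)                             # e.g. /tmp/tmp.NIkDH1wi9Z
python3 - "$key_hex" > "$path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # integrity check appended to the key
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF
chmod 0600 "$path"   # keys readable by group/other are rejected, as the
                     # chmod 0660 failure later in this log demonstrates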
00:23:57.470 [2024-07-25 12:31:12.225066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99592 ] 00:23:57.729 [2024-07-25 12:31:12.370724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.729 [2024-07-25 12:31:12.498946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:23:58.663 12:31:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:58.663 [2024-07-25 12:31:13.218862] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.663 null0 00:23:58.663 [2024-07-25 12:31:13.250834] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.663 [2024-07-25 12:31:13.251235] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:58.663 [2024-07-25 12:31:13.258807] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.663 12:31:13 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:58.663 [2024-07-25 12:31:13.274864] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:58.663 2024/07/25 12:31:13 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:23:58.663 request: 00:23:58.663 { 00:23:58.663 "method": "nvmf_subsystem_add_listener", 00:23:58.663 "params": { 00:23:58.663 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.663 "secure_channel": false, 00:23:58.663 "listen_address": { 00:23:58.663 "trtype": "tcp", 00:23:58.663 "traddr": "127.0.0.1", 00:23:58.663 "trsvcid": "4420" 00:23:58.663 } 00:23:58.663 } 00:23:58.663 } 00:23:58.663 Got JSON-RPC error 
response 00:23:58.663 GoRPCClient: error on JSON-RPC call 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:58.663 12:31:13 keyring_file -- keyring/file.sh@46 -- # bperfpid=99627 00:23:58.663 12:31:13 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:58.663 12:31:13 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99627 /var/tmp/bperf.sock 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99627 ']' 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.663 12:31:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:58.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:58.664 12:31:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.664 12:31:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:58.664 [2024-07-25 12:31:13.339851] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 00:23:58.664 [2024-07-25 12:31:13.340254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99627 ] 00:23:58.664 [2024-07-25 12:31:13.477482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.922 [2024-07-25 12:31:13.608939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.489 12:31:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.489 12:31:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:23:59.489 12:31:14 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIkDH1wi9Z 00:23:59.489 12:31:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NIkDH1wi9Z 00:23:59.748 12:31:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.PARqRXsaAe 00:23:59.748 12:31:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.PARqRXsaAe 00:24:00.006 12:31:14 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:00.006 12:31:14 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:00.006 12:31:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:00.006 12:31:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:00.006 12:31:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:00.264 12:31:15 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.NIkDH1wi9Z == 
\/\t\m\p\/\t\m\p\.\N\I\k\D\H\1\w\i\9\Z ]] 00:24:00.264 12:31:15 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:00.264 12:31:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:00.264 12:31:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:00.264 12:31:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:00.264 12:31:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:00.520 12:31:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.PARqRXsaAe == \/\t\m\p\/\t\m\p\.\P\A\R\q\R\X\s\a\A\e ]] 00:24:00.520 12:31:15 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:00.520 12:31:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:00.520 12:31:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:00.520 12:31:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:00.520 12:31:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:00.520 12:31:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:00.776 12:31:15 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:00.776 12:31:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:00.776 12:31:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:00.776 12:31:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:00.776 12:31:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:00.776 12:31:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:00.777 12:31:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:01.341 12:31:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:01.341 12:31:15 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:01.341 12:31:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:01.342 [2024-07-25 12:31:16.134033] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.342 nvme0n1 00:24:01.599 12:31:16 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:01.599 12:31:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:01.599 12:31:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:01.599 12:31:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:01.599 12:31:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:01.599 12:31:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:01.856 12:31:16 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:01.856 12:31:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:01.856 12:31:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:01.856 12:31:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:01.856 12:31:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:24:01.856 12:31:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:01.856 12:31:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:02.134 12:31:16 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:02.134 12:31:16 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.134 Running I/O for 1 seconds... 00:24:03.069 00:24:03.069 Latency(us) 00:24:03.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.069 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:03.069 nvme0n1 : 1.01 11433.01 44.66 0.00 0.00 11156.68 5600.35 21567.30 00:24:03.069 =================================================================================================================== 00:24:03.069 Total : 11433.01 44.66 0.00 0.00 11156.68 5600.35 21567.30 00:24:03.069 0 00:24:03.069 12:31:17 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:03.069 12:31:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:03.327 12:31:18 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:03.327 12:31:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:03.327 12:31:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:03.327 12:31:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:03.327 12:31:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:03.327 12:31:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:03.892 12:31:18 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:03.892 12:31:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:03.892 12:31:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:03.892 12:31:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:03.892 12:31:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:03.892 12:31:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:03.892 12:31:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:04.150 12:31:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:04.150 12:31:18 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:04.150 12:31:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:04.150 12:31:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:04.150 12:31:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:04.150 12:31:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.150 12:31:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:04.150 12:31:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
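A note on the get_refcnt checks threaded through this trace: they are nothing more than keyring_get_keys piped through jq, and the refcount they read tracks key usage (1 right after keyring_file_add_key, 2 while the attached nvme0 controller held key0, back to 1 after the bdev_nvme_detach_controller just above). The helper, condensed into a single pipeline with the socket path and jq filter as they appear in the log:

get_refcnt() {
    # Query the bdevperf app's keyring over its RPC socket and pull one field.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys |
        jq -r ".[] | select(.name == \"$1\") | .refcnt"
}
(( $(get_refcnt key0) == 1 ))   # matches the file.sh@65 check above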
00:24:04.150 12:31:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:04.150 12:31:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:04.150 [2024-07-25 12:31:19.005963] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:04.150 [2024-07-25 12:31:19.006532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c96f30 (107): Transport endpoint is not connected 00:24:04.150 [2024-07-25 12:31:19.007520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c96f30 (9): Bad file descriptor 00:24:04.150 [2024-07-25 12:31:19.008517] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:04.150 [2024-07-25 12:31:19.008551] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:04.150 [2024-07-25 12:31:19.008562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:04.150 2024/07/25 12:31:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:04.150 request: 00:24:04.150 { 00:24:04.150 "method": "bdev_nvme_attach_controller", 00:24:04.150 "params": { 00:24:04.150 "name": "nvme0", 00:24:04.150 "trtype": "tcp", 00:24:04.150 "traddr": "127.0.0.1", 00:24:04.150 "adrfam": "ipv4", 00:24:04.150 "trsvcid": "4420", 00:24:04.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:04.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:04.150 "prchk_reftag": false, 00:24:04.150 "prchk_guard": false, 00:24:04.150 "hdgst": false, 00:24:04.150 "ddgst": false, 00:24:04.150 "psk": "key1" 00:24:04.150 } 00:24:04.150 } 00:24:04.150 Got JSON-RPC error response 00:24:04.150 GoRPCClient: error on JSON-RPC call 00:24:04.408 12:31:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:04.408 12:31:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:04.408 12:31:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:04.408 12:31:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:04.408 12:31:19 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:04.408 12:31:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:04.408 12:31:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:04.408 12:31:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:04.408 12:31:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:04.408 12:31:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:04.665 12:31:19 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:04.665 
12:31:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:04.665 12:31:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:04.666 12:31:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:04.666 12:31:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:04.666 12:31:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:04.666 12:31:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:04.923 12:31:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:04.923 12:31:19 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:04.923 12:31:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:05.181 12:31:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:05.181 12:31:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:05.438 12:31:20 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:05.438 12:31:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:05.438 12:31:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:05.696 12:31:20 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:05.696 12:31:20 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.NIkDH1wi9Z 00:24:05.696 12:31:20 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIkDH1wi9Z 00:24:05.696 12:31:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:05.696 12:31:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIkDH1wi9Z 00:24:05.696 12:31:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:05.696 12:31:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.696 12:31:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:05.696 12:31:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.696 12:31:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIkDH1wi9Z 00:24:05.696 12:31:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NIkDH1wi9Z 00:24:05.696 [2024-07-25 12:31:20.558977] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NIkDH1wi9Z': 0100660 00:24:05.696 [2024-07-25 12:31:20.559033] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:05.952 2024/07/25 12:31:20 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.NIkDH1wi9Z], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:24:05.952 request: 00:24:05.952 { 00:24:05.952 "method": "keyring_file_add_key", 00:24:05.952 "params": { 00:24:05.952 "name": "key0", 00:24:05.952 "path": "/tmp/tmp.NIkDH1wi9Z" 00:24:05.952 } 00:24:05.952 } 00:24:05.952 Got JSON-RPC error response 00:24:05.952 GoRPCClient: error on JSON-RPC call 00:24:05.952 12:31:20 keyring_file -- common/autotest_common.sh@653 -- # 
es=1 00:24:05.952 12:31:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:05.952 12:31:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:05.952 12:31:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:05.952 12:31:20 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.NIkDH1wi9Z 00:24:05.952 12:31:20 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NIkDH1wi9Z 00:24:05.952 12:31:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NIkDH1wi9Z 00:24:05.952 12:31:20 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.NIkDH1wi9Z 00:24:05.952 12:31:20 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:05.952 12:31:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:05.952 12:31:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:05.952 12:31:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:05.952 12:31:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:05.952 12:31:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:06.546 12:31:21 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:06.546 12:31:21 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:06.546 12:31:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:06.546 [2024-07-25 12:31:21.315183] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NIkDH1wi9Z': No such file or directory 00:24:06.546 [2024-07-25 12:31:21.315232] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:06.546 [2024-07-25 12:31:21.315272] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:06.546 [2024-07-25 12:31:21.315281] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:06.546 [2024-07-25 12:31:21.315289] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:06.546 2024/07/25 
12:31:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:24:06.546 request: 00:24:06.546 { 00:24:06.546 "method": "bdev_nvme_attach_controller", 00:24:06.546 "params": { 00:24:06.546 "name": "nvme0", 00:24:06.546 "trtype": "tcp", 00:24:06.546 "traddr": "127.0.0.1", 00:24:06.546 "adrfam": "ipv4", 00:24:06.546 "trsvcid": "4420", 00:24:06.546 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.546 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:06.546 "prchk_reftag": false, 00:24:06.546 "prchk_guard": false, 00:24:06.546 "hdgst": false, 00:24:06.546 "ddgst": false, 00:24:06.546 "psk": "key0" 00:24:06.546 } 00:24:06.546 } 00:24:06.546 Got JSON-RPC error response 00:24:06.546 GoRPCClient: error on JSON-RPC call 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:06.546 12:31:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:06.546 12:31:21 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:06.546 12:31:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:06.813 12:31:21 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9QQEsf8Pvt 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:06.813 12:31:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:06.813 12:31:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:06.813 12:31:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:06.813 12:31:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:06.813 12:31:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:06.813 12:31:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9QQEsf8Pvt 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9QQEsf8Pvt 00:24:06.813 12:31:21 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.9QQEsf8Pvt 00:24:06.813 12:31:21 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9QQEsf8Pvt 00:24:06.813 12:31:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9QQEsf8Pvt 00:24:07.071 12:31:21 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:07.071 12:31:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:07.328 nvme0n1 00:24:07.328 12:31:22 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:07.328 12:31:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:07.328 12:31:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:07.328 12:31:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.328 12:31:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.328 12:31:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:07.586 12:31:22 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:07.586 12:31:22 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:07.586 12:31:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:07.844 12:31:22 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:07.844 12:31:22 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:07.844 12:31:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.844 12:31:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.844 12:31:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:08.103 12:31:22 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:08.103 12:31:22 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:08.103 12:31:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:08.103 12:31:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:08.103 12:31:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:08.103 12:31:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.103 12:31:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:08.360 12:31:23 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:08.360 12:31:23 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:08.360 12:31:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:08.923 12:31:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:08.923 12:31:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.923 12:31:23 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:08.923 12:31:23 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:08.923 12:31:23 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9QQEsf8Pvt 00:24:08.923 12:31:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9QQEsf8Pvt 
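The sequence at file.sh@99 through @104 above is the key-lifecycle check: refcnt read 2 while the attached controller held key0, keyring_file_remove_key then flagged the key removed (the jq -r .removed probe returned true) without freeing it, refcnt stayed at 1 until bdev_nvme_detach_controller dropped the last reference, and only then did keyring_get_keys report an empty list. The same sequence as a standalone sketch, with paths from the log (the comment on who owns the surviving reference is an inference from the trace):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
rpc keyring_file_remove_key key0        # key is still held by the nvme0 controller
rpc keyring_get_keys | jq '.[] | select(.name == "key0") | {removed, refcnt}'
#   -> removed: true, refcnt: 1, as in the trace
rpc bdev_nvme_detach_controller nvme0   # releases the last reference
rpc keyring_get_keys | jq length        # -> 0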
00:24:09.488 12:31:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.PARqRXsaAe 00:24:09.488 12:31:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.PARqRXsaAe 00:24:09.744 12:31:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:09.745 12:31:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:10.002 nvme0n1 00:24:10.002 12:31:24 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:10.002 12:31:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:10.260 12:31:24 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:10.260 "subsystems": [ 00:24:10.260 { 00:24:10.260 "subsystem": "keyring", 00:24:10.260 "config": [ 00:24:10.260 { 00:24:10.260 "method": "keyring_file_add_key", 00:24:10.260 "params": { 00:24:10.260 "name": "key0", 00:24:10.260 "path": "/tmp/tmp.9QQEsf8Pvt" 00:24:10.260 } 00:24:10.260 }, 00:24:10.260 { 00:24:10.260 "method": "keyring_file_add_key", 00:24:10.260 "params": { 00:24:10.260 "name": "key1", 00:24:10.260 "path": "/tmp/tmp.PARqRXsaAe" 00:24:10.260 } 00:24:10.260 } 00:24:10.260 ] 00:24:10.260 }, 00:24:10.260 { 00:24:10.260 "subsystem": "iobuf", 00:24:10.260 "config": [ 00:24:10.260 { 00:24:10.260 "method": "iobuf_set_options", 00:24:10.260 "params": { 00:24:10.260 "large_bufsize": 135168, 00:24:10.260 "large_pool_count": 1024, 00:24:10.260 "small_bufsize": 8192, 00:24:10.260 "small_pool_count": 8192 00:24:10.260 } 00:24:10.260 } 00:24:10.260 ] 00:24:10.260 }, 00:24:10.260 { 00:24:10.260 "subsystem": "sock", 00:24:10.260 "config": [ 00:24:10.260 { 00:24:10.260 "method": "sock_set_default_impl", 00:24:10.260 "params": { 00:24:10.260 "impl_name": "posix" 00:24:10.260 } 00:24:10.260 }, 00:24:10.260 { 00:24:10.260 "method": "sock_impl_set_options", 00:24:10.260 "params": { 00:24:10.260 "enable_ktls": false, 00:24:10.260 "enable_placement_id": 0, 00:24:10.260 "enable_quickack": false, 00:24:10.260 "enable_recv_pipe": true, 00:24:10.260 "enable_zerocopy_send_client": false, 00:24:10.260 "enable_zerocopy_send_server": true, 00:24:10.260 "impl_name": "ssl", 00:24:10.260 "recv_buf_size": 4096, 00:24:10.260 "send_buf_size": 4096, 00:24:10.260 "tls_version": 0, 00:24:10.260 "zerocopy_threshold": 0 00:24:10.260 } 00:24:10.260 }, 00:24:10.260 { 00:24:10.260 "method": "sock_impl_set_options", 00:24:10.260 "params": { 00:24:10.260 "enable_ktls": false, 00:24:10.261 "enable_placement_id": 0, 00:24:10.261 "enable_quickack": false, 00:24:10.261 "enable_recv_pipe": true, 00:24:10.261 "enable_zerocopy_send_client": false, 00:24:10.261 "enable_zerocopy_send_server": true, 00:24:10.261 "impl_name": "posix", 00:24:10.261 "recv_buf_size": 2097152, 00:24:10.261 "send_buf_size": 2097152, 00:24:10.261 "tls_version": 0, 00:24:10.261 "zerocopy_threshold": 0 00:24:10.261 } 00:24:10.261 } 00:24:10.261 ] 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "subsystem": "vmd", 00:24:10.261 "config": [] 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "subsystem": "accel", 00:24:10.261 "config": [ 00:24:10.261 { 00:24:10.261 "method": 
"accel_set_options", 00:24:10.261 "params": { 00:24:10.261 "buf_count": 2048, 00:24:10.261 "large_cache_size": 16, 00:24:10.261 "sequence_count": 2048, 00:24:10.261 "small_cache_size": 128, 00:24:10.261 "task_count": 2048 00:24:10.261 } 00:24:10.261 } 00:24:10.261 ] 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "subsystem": "bdev", 00:24:10.261 "config": [ 00:24:10.261 { 00:24:10.261 "method": "bdev_set_options", 00:24:10.261 "params": { 00:24:10.261 "bdev_auto_examine": true, 00:24:10.261 "bdev_io_cache_size": 256, 00:24:10.261 "bdev_io_pool_size": 65535, 00:24:10.261 "iobuf_large_cache_size": 16, 00:24:10.261 "iobuf_small_cache_size": 128 00:24:10.261 } 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "method": "bdev_raid_set_options", 00:24:10.261 "params": { 00:24:10.261 "process_max_bandwidth_mb_sec": 0, 00:24:10.261 "process_window_size_kb": 1024 00:24:10.261 } 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "method": "bdev_iscsi_set_options", 00:24:10.261 "params": { 00:24:10.261 "timeout_sec": 30 00:24:10.261 } 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "method": "bdev_nvme_set_options", 00:24:10.261 "params": { 00:24:10.261 "action_on_timeout": "none", 00:24:10.261 "allow_accel_sequence": false, 00:24:10.261 "arbitration_burst": 0, 00:24:10.261 "bdev_retry_count": 3, 00:24:10.261 "ctrlr_loss_timeout_sec": 0, 00:24:10.261 "delay_cmd_submit": true, 00:24:10.261 "dhchap_dhgroups": [ 00:24:10.261 "null", 00:24:10.261 "ffdhe2048", 00:24:10.261 "ffdhe3072", 00:24:10.261 "ffdhe4096", 00:24:10.261 "ffdhe6144", 00:24:10.261 "ffdhe8192" 00:24:10.261 ], 00:24:10.261 "dhchap_digests": [ 00:24:10.261 "sha256", 00:24:10.261 "sha384", 00:24:10.261 "sha512" 00:24:10.261 ], 00:24:10.261 "disable_auto_failback": false, 00:24:10.261 "fast_io_fail_timeout_sec": 0, 00:24:10.261 "generate_uuids": false, 00:24:10.261 "high_priority_weight": 0, 00:24:10.261 "io_path_stat": false, 00:24:10.261 "io_queue_requests": 512, 00:24:10.261 "keep_alive_timeout_ms": 10000, 00:24:10.261 "low_priority_weight": 0, 00:24:10.261 "medium_priority_weight": 0, 00:24:10.261 "nvme_adminq_poll_period_us": 10000, 00:24:10.261 "nvme_error_stat": false, 00:24:10.261 "nvme_ioq_poll_period_us": 0, 00:24:10.261 "rdma_cm_event_timeout_ms": 0, 00:24:10.261 "rdma_max_cq_size": 0, 00:24:10.261 "rdma_srq_size": 0, 00:24:10.261 "reconnect_delay_sec": 0, 00:24:10.261 "timeout_admin_us": 0, 00:24:10.261 "timeout_us": 0, 00:24:10.261 "transport_ack_timeout": 0, 00:24:10.261 "transport_retry_count": 4, 00:24:10.261 "transport_tos": 0 00:24:10.261 } 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "method": "bdev_nvme_attach_controller", 00:24:10.261 "params": { 00:24:10.261 "adrfam": "IPv4", 00:24:10.261 "ctrlr_loss_timeout_sec": 0, 00:24:10.261 "ddgst": false, 00:24:10.261 "fast_io_fail_timeout_sec": 0, 00:24:10.261 "hdgst": false, 00:24:10.261 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:10.261 "name": "nvme0", 00:24:10.261 "prchk_guard": false, 00:24:10.261 "prchk_reftag": false, 00:24:10.261 "psk": "key0", 00:24:10.261 "reconnect_delay_sec": 0, 00:24:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.261 "traddr": "127.0.0.1", 00:24:10.261 "trsvcid": "4420", 00:24:10.261 "trtype": "TCP" 00:24:10.261 } 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "method": "bdev_nvme_set_hotplug", 00:24:10.261 "params": { 00:24:10.261 "enable": false, 00:24:10.261 "period_us": 100000 00:24:10.261 } 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "method": "bdev_wait_for_examine" 00:24:10.261 } 00:24:10.261 ] 00:24:10.261 }, 00:24:10.261 { 00:24:10.261 "subsystem": 
"nbd", 00:24:10.261 "config": [] 00:24:10.261 } 00:24:10.261 ] 00:24:10.261 }' 00:24:10.261 12:31:24 keyring_file -- keyring/file.sh@114 -- # killprocess 99627 00:24:10.261 12:31:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99627 ']' 00:24:10.261 12:31:24 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99627 00:24:10.261 12:31:24 keyring_file -- common/autotest_common.sh@955 -- # uname 00:24:10.261 12:31:24 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:10.261 12:31:24 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99627 00:24:10.261 killing process with pid 99627 00:24:10.261 Received shutdown signal, test time was about 1.000000 seconds 00:24:10.261 00:24:10.261 Latency(us) 00:24:10.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.261 =================================================================================================================== 00:24:10.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.261 12:31:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:10.261 12:31:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:10.262 12:31:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99627' 00:24:10.262 12:31:25 keyring_file -- common/autotest_common.sh@969 -- # kill 99627 00:24:10.262 12:31:25 keyring_file -- common/autotest_common.sh@974 -- # wait 99627 00:24:10.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:10.521 12:31:25 keyring_file -- keyring/file.sh@117 -- # bperfpid=100097 00:24:10.521 12:31:25 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100097 /var/tmp/bperf.sock 00:24:10.521 12:31:25 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:10.521 12:31:25 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100097 ']' 00:24:10.521 12:31:25 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:10.521 12:31:25 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.521 12:31:25 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:10.521 "subsystems": [ 00:24:10.521 { 00:24:10.521 "subsystem": "keyring", 00:24:10.521 "config": [ 00:24:10.521 { 00:24:10.521 "method": "keyring_file_add_key", 00:24:10.521 "params": { 00:24:10.521 "name": "key0", 00:24:10.521 "path": "/tmp/tmp.9QQEsf8Pvt" 00:24:10.521 } 00:24:10.521 }, 00:24:10.521 { 00:24:10.521 "method": "keyring_file_add_key", 00:24:10.521 "params": { 00:24:10.521 "name": "key1", 00:24:10.521 "path": "/tmp/tmp.PARqRXsaAe" 00:24:10.521 } 00:24:10.521 } 00:24:10.521 ] 00:24:10.521 }, 00:24:10.521 { 00:24:10.521 "subsystem": "iobuf", 00:24:10.521 "config": [ 00:24:10.521 { 00:24:10.521 "method": "iobuf_set_options", 00:24:10.521 "params": { 00:24:10.521 "large_bufsize": 135168, 00:24:10.521 "large_pool_count": 1024, 00:24:10.521 "small_bufsize": 8192, 00:24:10.521 "small_pool_count": 8192 00:24:10.521 } 00:24:10.521 } 00:24:10.521 ] 00:24:10.521 }, 00:24:10.521 { 00:24:10.521 "subsystem": "sock", 00:24:10.521 "config": [ 00:24:10.521 { 00:24:10.521 "method": "sock_set_default_impl", 00:24:10.521 "params": { 00:24:10.521 "impl_name": "posix" 00:24:10.521 } 00:24:10.521 }, 00:24:10.521 { 00:24:10.521 "method": "sock_impl_set_options", 00:24:10.521 
"params": { 00:24:10.521 "enable_ktls": false, 00:24:10.521 "enable_placement_id": 0, 00:24:10.521 "enable_quickack": false, 00:24:10.521 "enable_recv_pipe": true, 00:24:10.521 "enable_zerocopy_send_client": false, 00:24:10.521 "enable_zerocopy_send_server": true, 00:24:10.521 "impl_name": "ssl", 00:24:10.521 "recv_buf_size": 4096, 00:24:10.521 "send_buf_size": 4096, 00:24:10.521 "tls_version": 0, 00:24:10.521 "zerocopy_threshold": 0 00:24:10.521 } 00:24:10.521 }, 00:24:10.521 { 00:24:10.521 "method": "sock_impl_set_options", 00:24:10.521 "params": { 00:24:10.521 "enable_ktls": false, 00:24:10.521 "enable_placement_id": 0, 00:24:10.521 "enable_quickack": false, 00:24:10.521 "enable_recv_pipe": true, 00:24:10.521 "enable_zerocopy_send_client": false, 00:24:10.521 "enable_zerocopy_send_server": true, 00:24:10.521 "impl_name": "posix", 00:24:10.521 "recv_buf_size": 2097152, 00:24:10.521 "send_buf_size": 2097152, 00:24:10.521 "tls_version": 0, 00:24:10.521 "zerocopy_threshold": 0 00:24:10.521 } 00:24:10.521 } 00:24:10.521 ] 00:24:10.521 }, 00:24:10.521 { 00:24:10.521 "subsystem": "vmd", 00:24:10.521 "config": [] 00:24:10.521 }, 00:24:10.521 { 00:24:10.521 "subsystem": "accel", 00:24:10.521 "config": [ 00:24:10.521 { 00:24:10.521 "method": "accel_set_options", 00:24:10.521 "params": { 00:24:10.521 "buf_count": 2048, 00:24:10.521 "large_cache_size": 16, 00:24:10.521 "sequence_count": 2048, 00:24:10.521 "small_cache_size": 128, 00:24:10.521 "task_count": 2048 00:24:10.521 } 00:24:10.521 } 00:24:10.521 ] 00:24:10.521 }, 00:24:10.521 { 00:24:10.521 "subsystem": "bdev", 00:24:10.521 "config": [ 00:24:10.521 { 00:24:10.521 "method": "bdev_set_options", 00:24:10.521 "params": { 00:24:10.521 "bdev_auto_examine": true, 00:24:10.521 "bdev_io_cache_size": 256, 00:24:10.521 "bdev_io_pool_size": 65535, 00:24:10.521 "iobuf_large_cache_size": 16, 00:24:10.521 "iobuf_small_cache_size": 128 00:24:10.521 } 00:24:10.521 }, 00:24:10.521 { 00:24:10.521 "method": "bdev_raid_set_options", 00:24:10.521 "params": { 00:24:10.521 "process_max_bandwidth_mb_sec": 0, 00:24:10.522 "process_window_size_kb": 1024 00:24:10.522 } 00:24:10.522 }, 00:24:10.522 { 00:24:10.522 "method": "bdev_iscsi_set_options", 00:24:10.522 "params": { 00:24:10.522 "timeout_sec": 30 00:24:10.522 } 00:24:10.522 }, 00:24:10.522 { 00:24:10.522 "method": "bdev_nvme_set_options", 00:24:10.522 "params": { 00:24:10.522 "action_on_timeout": "none", 00:24:10.522 "allow_accel_sequence": false, 00:24:10.522 "arbitration_burst": 0, 00:24:10.522 "bdev_retry_count": 3, 00:24:10.522 "ctrlr_loss_timeout_sec": 0, 00:24:10.522 "delay_cmd_submit": true, 00:24:10.522 "dhchap_dhgroups": [ 00:24:10.522 "null", 00:24:10.522 "ffdhe2048", 00:24:10.522 "ffdhe3072", 00:24:10.522 "ffdhe4096", 00:24:10.522 "ffdhe6144", 00:24:10.522 "ffdhe8192" 00:24:10.522 ], 00:24:10.522 "dhchap_digests": [ 00:24:10.522 "sha256", 00:24:10.522 "sha384", 00:24:10.522 "sha512" 00:24:10.522 ], 00:24:10.522 "disable_auto_failback": false, 00:24:10.522 "fast_io_fail_timeout_sec": 0, 00:24:10.522 "generate_uuids": false, 00:24:10.522 "high_priority_weight": 0, 00:24:10.522 "io_path_stat": false, 00:24:10.522 "io_queue_requests": 512, 00:24:10.522 "keep_alive_timeout_ms": 10000, 00:24:10.522 "low_priority_weight": 0, 00:24:10.522 "medium_priority_weight": 0, 00:24:10.522 "nvme_adminq_poll_period_us": 10000, 00:24:10.522 "nvme_error_stat": false, 00:24:10.522 "nvme_ioq_poll_period_us": 0, 00:24:10.522 "rdma_cm_event_timeout_ms": 0, 00:24:10.522 "rdma_max_cq_size": 0, 00:24:10.522 "rdma_srq_size": 0, 
00:24:10.522 "reconnect_delay_sec": 0, 00:24:10.522 "timeout_admin_us": 0, 00:24:10.522 "timeout_us": 0, 00:24:10.522 "transport_ack_timeout": 0, 00:24:10.522 "transport_retry_count": 4, 00:24:10.522 "transport_tos": 0 00:24:10.522 } 00:24:10.522 }, 00:24:10.522 { 00:24:10.522 "method": "bdev_nvme_attach_controller", 00:24:10.522 "params": { 00:24:10.522 "adrfam": "IPv4", 00:24:10.522 "ctrlr_loss_timeout_sec": 0, 00:24:10.522 "ddgst": false, 00:24:10.522 "fast_io_fail_timeout_sec": 0, 00:24:10.522 "hdgst": false, 00:24:10.522 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:10.522 "name": "nvme0", 00:24:10.522 "prchk_guard": false, 00:24:10.522 "prchk_reftag": false, 00:24:10.522 "psk": "key0", 00:24:10.522 "reconnect_delay_sec": 0, 00:24:10.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.522 "traddr": "127.0.0.1", 00:24:10.522 "trsvcid": "4420", 00:24:10.522 "trtype": "TCP" 00:24:10.522 } 00:24:10.522 }, 00:24:10.522 { 00:24:10.522 "method": "bdev_nvme_set_hotplug", 00:24:10.522 "params": { 00:24:10.522 "enable": false, 00:24:10.522 "period_us": 100000 00:24:10.522 } 00:24:10.522 }, 00:24:10.522 { 00:24:10.522 "method": "bdev_wait_for_examine" 00:24:10.522 } 00:24:10.522 ] 00:24:10.522 }, 00:24:10.522 { 00:24:10.522 "subsystem": "nbd", 00:24:10.522 "config": [] 00:24:10.522 } 00:24:10.522 ] 00:24:10.522 }' 00:24:10.522 12:31:25 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:10.522 12:31:25 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.522 12:31:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:10.522 [2024-07-25 12:31:25.315519] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
00:24:10.522 [2024-07-25 12:31:25.315839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100097 ] 00:24:10.778 [2024-07-25 12:31:25.460439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.778 [2024-07-25 12:31:25.572398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.035 [2024-07-25 12:31:25.759286] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.599 12:31:26 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.599 12:31:26 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:11.599 12:31:26 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:11.599 12:31:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.599 12:31:26 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:11.857 12:31:26 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:11.857 12:31:26 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:11.857 12:31:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:11.857 12:31:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:11.857 12:31:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:11.857 12:31:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.857 12:31:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:12.115 12:31:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:12.115 12:31:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:12.115 12:31:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:12.115 12:31:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:12.116 12:31:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:12.116 12:31:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:12.116 12:31:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:12.375 12:31:27 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:12.375 12:31:27 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:12.375 12:31:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:12.375 12:31:27 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:12.633 12:31:27 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:12.633 12:31:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:12.633 12:31:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9QQEsf8Pvt /tmp/tmp.PARqRXsaAe 00:24:12.633 12:31:27 keyring_file -- keyring/file.sh@20 -- # killprocess 100097 00:24:12.633 12:31:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100097 ']' 00:24:12.633 12:31:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100097 00:24:12.633 12:31:27 keyring_file -- common/autotest_common.sh@955 -- # uname 00:24:12.634 12:31:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
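
The verification pass above is the core of the keyring_file checks: keyring_get_keys is piped through jq, expecting a refcnt of 2 for key0 (consistent with it being held by the keyring and by the attached nvme0 controller) and 1 for key1, before cleanup removes the temp key files and kills bperf (pid 100097). The same probe in Python, as a sketch; the rpc.py path and socket are taken from the commands above:

import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

def get_refcnt(name, sock="/var/tmp/bperf.sock"):
    # Equivalent of: rpc.py -s $sock keyring_get_keys \
    #                | jq '.[] | select(.name == "<name>") | .refcnt'
    keys = json.loads(subprocess.check_output([RPC, "-s", sock, "keyring_get_keys"]))
    return next(k["refcnt"] for k in keys if k["name"] == name)

assert get_refcnt("key0") == 2  # referenced by the keyring and by controller nvme0
assert get_refcnt("key1") == 1  # referenced by the keyring only
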
00:24:12.634 12:31:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100097 00:24:12.634 12:31:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:12.634 killing process with pid 100097 00:24:12.634 Received shutdown signal, test time was about 1.000000 seconds 00:24:12.634 00:24:12.634 Latency(us) 00:24:12.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.634 =================================================================================================================== 00:24:12.634 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:12.634 12:31:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:12.634 12:31:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100097' 00:24:12.634 12:31:27 keyring_file -- common/autotest_common.sh@969 -- # kill 100097 00:24:12.634 12:31:27 keyring_file -- common/autotest_common.sh@974 -- # wait 100097 00:24:12.892 12:31:27 keyring_file -- keyring/file.sh@21 -- # killprocess 99592 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99592 ']' 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99592 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@955 -- # uname 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99592 00:24:12.892 killing process with pid 99592 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99592' 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@969 -- # kill 99592 00:24:12.892 [2024-07-25 12:31:27.749854] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:12.892 12:31:27 keyring_file -- common/autotest_common.sh@974 -- # wait 99592 00:24:13.458 00:24:13.458 real 0m16.259s 00:24:13.458 user 0m40.400s 00:24:13.458 sys 0m3.263s 00:24:13.459 12:31:28 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:13.459 12:31:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:13.459 ************************************ 00:24:13.459 END TEST keyring_file 00:24:13.459 ************************************ 00:24:13.459 12:31:28 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:24:13.459 12:31:28 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:13.459 12:31:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:13.459 12:31:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:13.459 12:31:28 -- common/autotest_common.sh@10 -- # set +x 00:24:13.459 ************************************ 00:24:13.459 START TEST keyring_linux 00:24:13.459 ************************************ 00:24:13.459 12:31:28 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:13.459 * Looking for test storage... 
00:24:13.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:13.459 12:31:28 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:13.459 12:31:28 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:12aea35b-e003-4c84-a908-b9044a3380d2 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=12aea35b-e003-4c84-a908-b9044a3380d2 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:13.459 12:31:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.717 12:31:28 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:13.717 12:31:28 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.717 12:31:28 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.717 12:31:28 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.717 12:31:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.717 12:31:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.717 12:31:28 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.717 12:31:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:13.717 12:31:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:13.718 12:31:28 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:13.718 /tmp/:spdk-test:key0 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:13.718 12:31:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:13.718 /tmp/:spdk-test:key1 00:24:13.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.718 12:31:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100253 00:24:13.718 12:31:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100253 00:24:13.718 12:31:28 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100253 ']' 00:24:13.718 12:31:28 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.718 12:31:28 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.718 12:31:28 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.718 12:31:28 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.718 12:31:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:13.718 [2024-07-25 12:31:28.484281] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
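
prep_key above converts each raw hex key into the NVMe TLS PSK interchange format before writing it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. Judging from the format_key trace and its inline python, the encoding is base64 over the configured key bytes with a little-endian CRC32 appended; a standalone sketch of that transform, which reproduces the key0 value logged above:

import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    # "NVMeTLSkey-1:<hash id>:<base64(key bytes || crc32 LE)>:"; digest 0
    # renders as "00" (no hash applied to the configured key).
    crc = zlib.crc32(key.encode()).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

# Matches the logged key0:
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
print(format_interchange_psk("00112233445566778899aabbccddeeff"))
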
00:24:13.718 [2024-07-25 12:31:28.484636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100253 ] 00:24:13.976 [2024-07-25 12:31:28.619733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.976 [2024-07-25 12:31:28.744066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:24:14.911 12:31:29 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:14.911 [2024-07-25 12:31:29.550124] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.911 null0 00:24:14.911 [2024-07-25 12:31:29.582085] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.911 [2024-07-25 12:31:29.582494] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.911 12:31:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:14.911 182388078 00:24:14.911 12:31:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:14.911 257964454 00:24:14.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:14.911 12:31:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100289 00:24:14.911 12:31:29 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:14.911 12:31:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100289 /var/tmp/bperf.sock 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100289 ']' 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.911 12:31:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:14.911 [2024-07-25 12:31:29.665066] Starting SPDK v24.09-pre git sha1 5038b8b39 / DPDK 24.03.0 initialization... 
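
keyring_linux differs from keyring_file in where the PSKs live: linux.sh@66/67 above push them into the kernel session keyring with keyctl, and the serial numbers echoed back (182388078 for :spdk-test:key0, 257964454 for :spdk-test:key1) are what get_keysn later recovers with keyctl search, as shown further below. A Python sketch of the same round trip (it assumes the keyctl CLI from keyutils, as used throughout this test; the PSK payload is shortened here):

import subprocess

def keyctl(*args: str) -> str:
    return subprocess.check_output(("keyctl", *args), text=True).strip()

def add_session_key(name: str, payload: str) -> int:
    # keyctl add user <name> <payload> @s -> prints the new key's serial number
    return int(keyctl("add", "user", name, payload, "@s"))

def get_keysn(name: str) -> int:
    # keyctl search @s user <name>, as in keyring/linux.sh@16
    return int(keyctl("search", "@s", "user", name))

sn = add_session_key(":spdk-test:key0", "NVMeTLSkey-1:00:...")  # payload elided
assert get_keysn(":spdk-test:key0") == sn
print(keyctl("print", str(sn)))  # dump the stored PSK, as keyring/linux.sh@27 does
keyctl("unlink", str(sn))        # cleanup; the CLI reports "1 links removed"
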
00:24:14.911 [2024-07-25 12:31:29.665440] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100289 ] 00:24:15.173 [2024-07-25 12:31:29.805699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.173 [2024-07-25 12:31:29.932130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.122 12:31:30 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.122 12:31:30 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:24:16.122 12:31:30 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:16.122 12:31:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:16.122 12:31:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:16.122 12:31:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:16.411 12:31:31 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:16.411 12:31:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:16.983 [2024-07-25 12:31:31.552258] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.983 nvme0n1 00:24:16.983 12:31:31 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:16.983 12:31:31 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:16.983 12:31:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:16.983 12:31:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:16.983 12:31:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:16.983 12:31:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.242 12:31:31 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:17.242 12:31:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:17.242 12:31:31 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:17.242 12:31:31 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:17.242 12:31:31 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:17.242 12:31:31 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:17.242 12:31:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.500 12:31:32 keyring_linux -- keyring/linux.sh@25 -- # sn=182388078 00:24:17.500 12:31:32 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:17.500 12:31:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:17.500 12:31:32 keyring_linux -- keyring/linux.sh@26 -- # [[ 182388078 == \1\8\2\3\8\8\0\7\8 ]] 00:24:17.500 12:31:32 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 182388078 00:24:17.500 12:31:32 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:17.500 12:31:32 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:17.500 Running I/O for 1 seconds... 00:24:18.876 00:24:18.876 Latency(us) 00:24:18.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.876 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:18.876 nvme0n1 : 1.01 11001.40 42.97 0.00 0.00 11558.58 3723.64 13524.25 00:24:18.876 =================================================================================================================== 00:24:18.876 Total : 11001.40 42.97 0.00 0.00 11558.58 3723.64 13524.25 00:24:18.876 0 00:24:18.876 12:31:33 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:18.876 12:31:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:18.876 12:31:33 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:18.876 12:31:33 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:18.876 12:31:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:18.876 12:31:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:18.876 12:31:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:18.876 12:31:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:19.135 12:31:33 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:19.135 12:31:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:19.135 12:31:33 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:19.135 12:31:33 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:19.135 12:31:33 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:24:19.135 12:31:33 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:19.135 12:31:33 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:19.135 12:31:33 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:19.135 12:31:33 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:19.135 12:31:33 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:19.135 12:31:33 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:19.135 12:31:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:24:19.394 [2024-07-25 12:31:34.176343] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:19.394 [2024-07-25 12:31:34.176425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2097ea0 (107): Transport endpoint is not connected 00:24:19.394 [2024-07-25 12:31:34.177414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2097ea0 (9): Bad file descriptor 00:24:19.394 [2024-07-25 12:31:34.178410] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.394 [2024-07-25 12:31:34.178430] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:19.394 [2024-07-25 12:31:34.178441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.394 2024/07/25 12:31:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:19.394 request: 00:24:19.394 { 00:24:19.394 "method": "bdev_nvme_attach_controller", 00:24:19.394 "params": { 00:24:19.394 "name": "nvme0", 00:24:19.394 "trtype": "tcp", 00:24:19.394 "traddr": "127.0.0.1", 00:24:19.394 "adrfam": "ipv4", 00:24:19.394 "trsvcid": "4420", 00:24:19.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:19.394 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:19.394 "prchk_reftag": false, 00:24:19.394 "prchk_guard": false, 00:24:19.394 "hdgst": false, 00:24:19.394 "ddgst": false, 00:24:19.394 "psk": ":spdk-test:key1" 00:24:19.394 } 00:24:19.394 } 00:24:19.394 Got JSON-RPC error response 00:24:19.394 GoRPCClient: error on JSON-RPC call 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@33 -- # sn=182388078 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 182388078 00:24:19.394 1 links removed 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@33 -- # sn=257964454 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 257964454 00:24:19.394 1 links removed 00:24:19.394 12:31:34 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100289 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100289 ']' 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100289 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100289 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100289' 00:24:19.394 killing process with pid 100289 00:24:19.394 12:31:34 keyring_linux -- common/autotest_common.sh@969 -- # kill 100289 00:24:19.394 Received shutdown signal, test time was about 1.000000 seconds 00:24:19.395 00:24:19.395 Latency(us) 00:24:19.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.395 =================================================================================================================== 00:24:19.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.395 12:31:34 keyring_linux -- common/autotest_common.sh@974 -- # wait 100289 00:24:19.683 12:31:34 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100253 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100253 ']' 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100253 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100253 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:19.683 killing process with pid 100253 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100253' 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@969 -- # kill 100253 00:24:19.683 12:31:34 keyring_linux -- common/autotest_common.sh@974 -- # wait 100253 00:24:20.249 00:24:20.249 real 0m6.678s 00:24:20.249 user 0m13.143s 00:24:20.249 sys 0m1.589s 00:24:20.249 12:31:34 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.249 12:31:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:20.249 ************************************ 00:24:20.249 END TEST keyring_linux 00:24:20.249 ************************************ 00:24:20.250 12:31:34 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:20.250 12:31:34 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:20.250 12:31:34 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:24:20.250 12:31:34 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:24:20.250 12:31:34 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:24:20.250 12:31:34 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 
00:24:20.250 12:31:34 -- spdk/autotest.sh@343 -- '[' 0 -eq 1 ']'
00:24:20.250 12:31:34 -- spdk/autotest.sh@347 -- '[' 0 -eq 1 ']'
00:24:20.250 12:31:34 -- spdk/autotest.sh@351 -- '[' 0 -eq 1 ']'
00:24:20.250 12:31:34 -- spdk/autotest.sh@356 -- '[' 0 -eq 1 ']'
00:24:20.250 12:31:34 -- spdk/autotest.sh@360 -- '[' 0 -eq 1 ']'
00:24:20.250 12:31:34 -- spdk/autotest.sh@367 -- [[ 0 -eq 1 ]]
00:24:20.250 12:31:34 -- spdk/autotest.sh@371 -- [[ 0 -eq 1 ]]
00:24:20.250 12:31:34 -- spdk/autotest.sh@375 -- [[ 0 -eq 1 ]]
00:24:20.250 12:31:34 -- spdk/autotest.sh@379 -- [[ 0 -eq 1 ]]
00:24:20.250 12:31:34 -- spdk/autotest.sh@384 -- trap - SIGINT SIGTERM EXIT
00:24:20.250 12:31:34 -- spdk/autotest.sh@386 -- timing_enter post_cleanup
00:24:20.250 12:31:34 -- common/autotest_common.sh@724 -- xtrace_disable
00:24:20.250 12:31:34 -- common/autotest_common.sh@10 -- set +x
00:24:20.250 12:31:34 -- spdk/autotest.sh@387 -- autotest_cleanup
00:24:20.250 12:31:34 -- common/autotest_common.sh@1392 -- local autotest_es=0
00:24:20.250 12:31:34 -- common/autotest_common.sh@1393 -- xtrace_disable
00:24:20.250 12:31:34 -- common/autotest_common.sh@10 -- set +x
00:24:21.625 INFO: APP EXITING
00:24:21.625 INFO: killing all VMs
00:24:21.625 INFO: killing vhost app
00:24:21.625 INFO: EXIT DONE
00:24:22.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:22.559 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:24:22.559 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:24:23.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:23.127 Cleaning
00:24:23.127 Removing: /var/run/dpdk/spdk0/config
00:24:23.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:24:23.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:24:23.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:24:23.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:24:23.127 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:24:23.127 Removing: /var/run/dpdk/spdk0/hugepage_info
00:24:23.127 Removing: /var/run/dpdk/spdk1/config
00:24:23.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:24:23.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:24:23.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:24:23.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:24:23.127 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:24:23.127 Removing: /var/run/dpdk/spdk1/hugepage_info
00:24:23.127 Removing: /var/run/dpdk/spdk2/config
00:24:23.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:24:23.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:24:23.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:24:23.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:24:23.127 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:24:23.127 Removing: /var/run/dpdk/spdk2/hugepage_info
00:24:23.127 Removing: /var/run/dpdk/spdk3/config
00:24:23.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:24:23.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:24:23.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:24:23.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:24:23.386 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:24:23.386 Removing: /var/run/dpdk/spdk3/hugepage_info
00:24:23.386 Removing: /var/run/dpdk/spdk4/config
00:24:23.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:24:23.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:24:23.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:24:23.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:24:23.386 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:24:23.386 Removing: /var/run/dpdk/spdk4/hugepage_info
00:24:23.386 Removing: /dev/shm/nvmf_trace.0
00:24:23.386 Removing: /dev/shm/spdk_tgt_trace.pid60564
00:24:23.386 Removing: /var/run/dpdk/spdk0
00:24:23.386 Removing: /var/run/dpdk/spdk1
00:24:23.386 Removing: /var/run/dpdk/spdk2
00:24:23.386 Removing: /var/run/dpdk/spdk3
00:24:23.386 Removing: /var/run/dpdk/spdk4
00:24:23.386 Removing: /var/run/dpdk/spdk_pid100097
00:24:23.386 Removing: /var/run/dpdk/spdk_pid100253
00:24:23.386 Removing: /var/run/dpdk/spdk_pid100289
00:24:23.386 Removing: /var/run/dpdk/spdk_pid60419
00:24:23.386 Removing: /var/run/dpdk/spdk_pid60564
00:24:23.386 Removing: /var/run/dpdk/spdk_pid60825
00:24:23.386 Removing: /var/run/dpdk/spdk_pid60912
00:24:23.386 Removing: /var/run/dpdk/spdk_pid60957
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61061
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61091
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61219
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61494
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61671
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61742
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61834
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61929
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61962
00:24:23.386 Removing: /var/run/dpdk/spdk_pid61998
00:24:23.386 Removing: /var/run/dpdk/spdk_pid62059
00:24:23.386 Removing: /var/run/dpdk/spdk_pid62155
00:24:23.386 Removing: /var/run/dpdk/spdk_pid62786
00:24:23.386 Removing: /var/run/dpdk/spdk_pid62850
00:24:23.386 Removing: /var/run/dpdk/spdk_pid62919
00:24:23.386 Removing: /var/run/dpdk/spdk_pid62947
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63026
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63054
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63133
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63161
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63218
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63248
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63294
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63324
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63475
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63506
00:24:23.386 Removing: /var/run/dpdk/spdk_pid63581
00:24:23.386 Removing: /var/run/dpdk/spdk_pid64019
00:24:23.386 Removing: /var/run/dpdk/spdk_pid64368
00:24:23.386 Removing: /var/run/dpdk/spdk_pid66774
00:24:23.386 Removing: /var/run/dpdk/spdk_pid66825
00:24:23.386 Removing: /var/run/dpdk/spdk_pid67148
00:24:23.386 Removing: /var/run/dpdk/spdk_pid67198
00:24:23.386 Removing: /var/run/dpdk/spdk_pid67561
00:24:23.386 Removing: /var/run/dpdk/spdk_pid68096
00:24:23.386 Removing: /var/run/dpdk/spdk_pid68545
00:24:23.386 Removing: /var/run/dpdk/spdk_pid69522
00:24:23.386 Removing: /var/run/dpdk/spdk_pid70508
00:24:23.386 Removing: /var/run/dpdk/spdk_pid70630
00:24:23.386 Removing: /var/run/dpdk/spdk_pid70693
00:24:23.386 Removing: /var/run/dpdk/spdk_pid72163
00:24:23.386 Removing: /var/run/dpdk/spdk_pid72459
00:24:23.386 Removing: /var/run/dpdk/spdk_pid75779
00:24:23.386 Removing: /var/run/dpdk/spdk_pid76155
00:24:23.387 Removing: /var/run/dpdk/spdk_pid76753
00:24:23.387 Removing: /var/run/dpdk/spdk_pid77157
00:24:23.387 Removing: /var/run/dpdk/spdk_pid82547
00:24:23.387 Removing: /var/run/dpdk/spdk_pid82978
00:24:23.387 Removing: /var/run/dpdk/spdk_pid83086
00:24:23.387 Removing: /var/run/dpdk/spdk_pid83238
00:24:23.387 Removing: /var/run/dpdk/spdk_pid83289
00:24:23.387 Removing: /var/run/dpdk/spdk_pid83329
00:24:23.387 Removing: /var/run/dpdk/spdk_pid83379
00:24:23.387 Removing: /var/run/dpdk/spdk_pid83544
00:24:23.387 Removing: /var/run/dpdk/spdk_pid83693
00:24:23.387 Removing: /var/run/dpdk/spdk_pid83963
00:24:23.387 Removing: /var/run/dpdk/spdk_pid84080
00:24:23.387 Removing: /var/run/dpdk/spdk_pid84333
00:24:23.387 Removing: /var/run/dpdk/spdk_pid84464
00:24:23.387 Removing: /var/run/dpdk/spdk_pid84600
00:24:23.387 Removing: /var/run/dpdk/spdk_pid84938
00:24:23.387 Removing: /var/run/dpdk/spdk_pid85388
00:24:23.387 Removing: /var/run/dpdk/spdk_pid85704
00:24:23.387 Removing: /var/run/dpdk/spdk_pid86200
00:24:23.387 Removing: /var/run/dpdk/spdk_pid86202
00:24:23.685 Removing: /var/run/dpdk/spdk_pid86543
00:24:23.685 Removing: /var/run/dpdk/spdk_pid86557
00:24:23.685 Removing: /var/run/dpdk/spdk_pid86583
00:24:23.685 Removing: /var/run/dpdk/spdk_pid86609
00:24:23.685 Removing: /var/run/dpdk/spdk_pid86615
00:24:23.685 Removing: /var/run/dpdk/spdk_pid86972
00:24:23.685 Removing: /var/run/dpdk/spdk_pid87020
00:24:23.685 Removing: /var/run/dpdk/spdk_pid87353
00:24:23.685 Removing: /var/run/dpdk/spdk_pid87609
00:24:23.685 Removing: /var/run/dpdk/spdk_pid88101
00:24:23.685 Removing: /var/run/dpdk/spdk_pid88686
00:24:23.685 Removing: /var/run/dpdk/spdk_pid90059
00:24:23.685 Removing: /var/run/dpdk/spdk_pid90658
00:24:23.685 Removing: /var/run/dpdk/spdk_pid90661
00:24:23.685 Removing: /var/run/dpdk/spdk_pid92612
00:24:23.685 Removing: /var/run/dpdk/spdk_pid92702
00:24:23.685 Removing: /var/run/dpdk/spdk_pid92794
00:24:23.685 Removing: /var/run/dpdk/spdk_pid92884
00:24:23.685 Removing: /var/run/dpdk/spdk_pid93047
00:24:23.685 Removing: /var/run/dpdk/spdk_pid93143
00:24:23.685 Removing: /var/run/dpdk/spdk_pid93229
00:24:23.685 Removing: /var/run/dpdk/spdk_pid93319
00:24:23.685 Removing: /var/run/dpdk/spdk_pid93671
00:24:23.686 Removing: /var/run/dpdk/spdk_pid94365
00:24:23.686 Removing: /var/run/dpdk/spdk_pid95724
00:24:23.686 Removing: /var/run/dpdk/spdk_pid95930
00:24:23.686 Removing: /var/run/dpdk/spdk_pid96217
00:24:23.686 Removing: /var/run/dpdk/spdk_pid96514
00:24:23.686 Removing: /var/run/dpdk/spdk_pid97073
00:24:23.686 Removing: /var/run/dpdk/spdk_pid97078
00:24:23.686 Removing: /var/run/dpdk/spdk_pid97437
00:24:23.686 Removing: /var/run/dpdk/spdk_pid97596
00:24:23.686 Removing: /var/run/dpdk/spdk_pid97753
00:24:23.686 Removing: /var/run/dpdk/spdk_pid97849
00:24:23.686 Removing: /var/run/dpdk/spdk_pid98000
00:24:23.686 Removing: /var/run/dpdk/spdk_pid98109
00:24:23.686 Removing: /var/run/dpdk/spdk_pid98776
00:24:23.686 Removing: /var/run/dpdk/spdk_pid98811
00:24:23.686 Removing: /var/run/dpdk/spdk_pid98847
00:24:23.686 Removing: /var/run/dpdk/spdk_pid99104
00:24:23.686 Removing: /var/run/dpdk/spdk_pid99136
00:24:23.686 Removing: /var/run/dpdk/spdk_pid99167
00:24:23.686 Removing: /var/run/dpdk/spdk_pid99592
00:24:23.686 Removing: /var/run/dpdk/spdk_pid99627
00:24:23.686 Clean
00:24:23.686 12:31:38 -- common/autotest_common.sh@1451 -- # return 0
00:24:23.686 12:31:38 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:24:23.686 12:31:38 -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:23.686 12:31:38 -- common/autotest_common.sh@10 -- # set +x 00:24:23.686 12:31:38 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:23.686 12:31:38 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:23.686 12:31:38 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:23.686 12:31:38 -- spdk/autotest.sh@395 -- # hash lcov 00:24:23.686 12:31:38 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:24:23.960 12:31:38 -- spdk/autotest.sh@397 -- # hostname 00:24:23.960 12:31:38 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:23.960 geninfo: WARNING: invalid characters removed from testname! 00:24:50.516 12:32:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:53.047 12:32:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:55.578 12:32:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:58.860 12:32:13 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:01.390 12:32:15 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:03.920 12:32:18 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:06.450 12:32:20 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:06.450 12:32:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
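
The coverage stage above captures test counters with lcov -c against the spdk tree, merges them with the pre-test baseline, and prunes dpdk, system, and example/app sources with successive -r passes before cov_total.info is published. The same merge-and-filter pipeline driven from Python, as a simplified sketch: only two of the --rc options are kept in RC_OPTS (the genhtml_* options and --no-external shown above are dropped for brevity), the capture step is omitted, and paths are as in the log:

import subprocess

OUT = "/home/vagrant/spdk_repo/spdk/../output"
RC_OPTS = ["--rc", "lcov_branch_coverage=1", "--rc", "lcov_function_coverage=1"]

def lcov(*args: str) -> None:
    subprocess.check_call(["lcov", *RC_OPTS, "-q", *args])

total = f"{OUT}/cov_total.info"
# Merge the pre-test baseline with the post-test capture...
lcov("-a", f"{OUT}/cov_base.info", "-a", f"{OUT}/cov_test.info", "-o", total)
# ...then drop coverage for code that is not SPDK's own.
for pattern in ("*/dpdk/*", "/usr/*", "*/examples/vmd/*",
                "*/app/spdk_lspci/*", "*/app/spdk_top/*"):
    lcov("-r", total, pattern, "-o", total)
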
00:25:06.450 12:32:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:06.450 12:32:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.450 12:32:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.450 12:32:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.450 12:32:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.450 12:32:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.450 12:32:20 -- paths/export.sh@5 -- $ export PATH 00:25:06.450 12:32:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.450 12:32:20 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:06.450 12:32:20 -- common/autobuild_common.sh@447 -- $ date +%s 00:25:06.450 12:32:20 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721910740.XXXXXX 00:25:06.450 12:32:21 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721910740.SysNsz 00:25:06.450 12:32:21 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:25:06.450 12:32:21 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:25:06.450 12:32:21 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:25:06.450 12:32:21 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:06.450 12:32:21 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:06.450 12:32:21 -- common/autobuild_common.sh@463 -- $ get_config_params 00:25:06.450 12:32:21 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:25:06.450 12:32:21 -- common/autotest_common.sh@10 -- $ set +x 00:25:06.450 12:32:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi 
--with-golang' 00:25:06.450 12:32:21 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:25:06.450 12:32:21 -- pm/common@17 -- $ local monitor 00:25:06.450 12:32:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:06.450 12:32:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:06.450 12:32:21 -- pm/common@25 -- $ sleep 1 00:25:06.450 12:32:21 -- pm/common@21 -- $ date +%s 00:25:06.450 12:32:21 -- pm/common@21 -- $ date +%s 00:25:06.450 12:32:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721910741 00:25:06.450 12:32:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721910741 00:25:06.450 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721910741_collect-vmstat.pm.log 00:25:06.450 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721910741_collect-cpu-load.pm.log 00:25:07.384 12:32:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:25:07.384 12:32:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:07.384 12:32:22 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:07.384 12:32:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:07.384 12:32:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:07.384 12:32:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:07.384 12:32:22 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:07.384 12:32:22 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:07.384 12:32:22 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:07.384 12:32:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:07.384 12:32:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:07.384 12:32:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:07.384 12:32:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:07.384 12:32:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:07.384 12:32:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:07.384 12:32:22 -- pm/common@44 -- $ pid=101951 00:25:07.384 12:32:22 -- pm/common@50 -- $ kill -TERM 101951 00:25:07.384 12:32:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:07.384 12:32:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:07.384 12:32:22 -- pm/common@44 -- $ pid=101953 00:25:07.384 12:32:22 -- pm/common@50 -- $ kill -TERM 101953 00:25:07.384 + [[ -n 5161 ]] 00:25:07.384 + sudo kill 5161 00:25:07.394 [Pipeline] } 00:25:07.414 [Pipeline] // timeout 00:25:07.420 [Pipeline] } 00:25:07.438 [Pipeline] // stage 00:25:07.444 [Pipeline] } 00:25:07.461 [Pipeline] // catchError 00:25:07.472 [Pipeline] stage 00:25:07.474 [Pipeline] { (Stop VM) 00:25:07.489 [Pipeline] sh 00:25:07.768 + vagrant halt 00:25:11.056 ==> default: Halting domain... 00:25:17.621 [Pipeline] sh 00:25:17.899 + vagrant destroy -f 00:25:22.082 ==> default: Removing domain... 
00:25:22.094 [Pipeline] sh 00:25:22.376 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:25:22.385 [Pipeline] } 00:25:22.403 [Pipeline] // stage 00:25:22.409 [Pipeline] } 00:25:22.426 [Pipeline] // dir 00:25:22.432 [Pipeline] } 00:25:22.445 [Pipeline] // wrap 00:25:22.451 [Pipeline] } 00:25:22.467 [Pipeline] // catchError 00:25:22.476 [Pipeline] stage 00:25:22.478 [Pipeline] { (Epilogue) 00:25:22.494 [Pipeline] sh 00:25:22.775 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:29.347 [Pipeline] catchError 00:25:29.349 [Pipeline] { 00:25:29.364 [Pipeline] sh 00:25:29.644 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:29.910 Artifacts sizes are good 00:25:29.919 [Pipeline] } 00:25:29.936 [Pipeline] // catchError 00:25:29.948 [Pipeline] archiveArtifacts 00:25:29.955 Archiving artifacts 00:25:30.126 [Pipeline] cleanWs 00:25:30.138 [WS-CLEANUP] Deleting project workspace... 00:25:30.138 [WS-CLEANUP] Deferred wipeout is used... 00:25:30.145 [WS-CLEANUP] done 00:25:30.147 [Pipeline] } 00:25:30.166 [Pipeline] // stage 00:25:30.172 [Pipeline] } 00:25:30.190 [Pipeline] // node 00:25:30.195 [Pipeline] End of Pipeline 00:25:30.234 Finished: SUCCESS